
Deploying Red Hat OpenStack Services on OpenShift


Red Hat OpenStack Services on OpenShift 18.0

Deploying a Red Hat OpenStack Services on OpenShift environment on a Red Hat OpenShift Container Platform cluster

OpenStack Documentation Team

Abstract

Learn how to install the OpenStack Operator for Red Hat OpenStack Services on OpenShift (RHOSO), create a RHOSO control plane on a Red Hat OpenShift Container Platform cluster, and use the OpenStack Operator to deploy a RHOSO data plane.

Providing feedback on Red Hat documentation

We appreciate your input on our documentation. Tell us how we can make it better.

Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback.

To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com.

  1. Click the following link to open a Create Issue page: Create Issue
  2. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form.
  3. Click Create.

Chapter 1. Installing and preparing the Operators

You install the Red Hat OpenStack Services on OpenShift (RHOSO) OpenStack Operator (openstack-operator) and create the RHOSO control plane on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. You install the OpenStack Operator by using the RHOCP web console. You perform the control plane installation tasks and all data plane creation tasks on a workstation that has access to the RHOCP cluster.

For information about mapping RHOSO versions to OpenStack Operators and OpenStackVersion Custom Resources (CRs), see the Red Hat knowledge base article at https://access.redhat.com/articles/7125383.
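
For example, after you install the Operator you can list the ClusterServiceVersion (CSV) in the openstack-operators namespace to see which OpenStack Operator version is running, and then map that version by using the knowledge base article. This sketch assumes the Operator recommended namespace that is used later in this chapter:

$ oc get csv -n openstack-operators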

1.1. Prerequisites

1.2. Installing the OpenStack Operator

You use OperatorHub on the Red Hat OpenShift Container Platform (RHOCP) web console to install the OpenStack Operator (openstack-operator) on your RHOCP cluster. After you install the Operator, you configure a single instance of the OpenStack Operator initialization resource to start the OpenStack Operator on your cluster.

Procedure

  1. Log in to the RHOCP web console as a user with cluster-admin permissions.
  2. Select Operators → OperatorHub.
  3. In the Filter by keyword field, type OpenStack.
  4. Click the OpenStack Operator tile with the Red Hat source label.
  5. Read the information about the Operator and click Install.
  6. On the Install Operator page, select "Operator recommended Namespace: openstack-operators" from the Installed Namespace list.
  7. On the Install Operator page, select "Manual" from the Update approval list. For information about how to manually approve a pending Operator update, see Manually approving a pending Operator update in the RHOCP Operators guide.
  8. Click Install to make the Operator available to the openstack-operators namespace. The OpenStack Operator is installed when the Status is Succeeded.
  9. Click Create OpenStack to open the Create OpenStack page.
  10. On the Create OpenStack page, click Create to create an instance of the OpenStack Operator initialization resource. The OpenStack Operator is ready to use when the Status of the openstack instance is Conditions: Ready.
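
If you prefer the command line, you can check the same status that the web console shows. This is a minimal sketch; it assumes that the Operator is installed in the recommended openstack-operators namespace and that the initialization resource uses the default name openstack:

$ oc get pods -n openstack-operators
$ oc get openstack -n openstack-operators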

You install Red Hat OpenStack Services on OpenShift (RHOSO) on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. To prepare for installing and deploying your RHOSO environment, you must configure the RHOCP worker nodes and the RHOCP networks on your RHOCP cluster.

Red Hat OpenStack Services on OpenShift (RHOSO) services run on Red Hat OpenShift Container Platform (RHOCP) worker nodes. By default, the OpenStack Operator deploys RHOSO services on any worker node. You can use node labels in your OpenStackControlPlane custom resource (CR) to specify which RHOCP nodes host the RHOSO services. By pinning some services to specific infrastructure nodes rather than running the services on all of your RHOCP worker nodes, you optimize the performance of your deployment.

You can create new labels for the RHOCP nodes, or you can use the existing labels, and then specify those labels in the OpenStackControlPlane CR by using the nodeSelector field. For example, the Block Storage service (cinder) has different requirements for each of its services:

  • The cinder-scheduler service is a very light service with low memory, disk, network, and CPU usage.
  • The cinder-api service has high network usage due to resource listing requests.
  • The cinder-volume service has high disk and network usage because many of its operations are in the data path, such as offline volume migration, and creating a volume from an image.
  • The cinder-backup service has high memory, network, and CPU requirements.

Therefore, you can pin the cinder-api, cinder-volume, and cinder-backup services to dedicated nodes and let the OpenStack Operator place the cinder-scheduler service on a node that has capacity.
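
For example, the following sketch labels a worker node and pins the cinder-volume service to nodes that have that label by using the nodeSelector field in the OpenStackControlPlane CR. The node name worker-3 and the label type=cinder are hypothetical, and the exact placement of the nodeSelector field depends on the service template that you configure:

$ oc label nodes worker-3 type=cinder

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        volume1:
          nodeSelector:
            type: cinder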

Tip

Alternatively, you can create Topology CRs and use the topologyRef field in your OpenStackControlPlane CR to control service pod placement after your RHOCP cluster has been prepared. For more information, see Controlling service pod placement with Topology CRs.

2.2. Creating a storage class

You must create a storage class for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end to provide persistent volumes to Red Hat OpenStack Services on OpenShift (RHOSO) pods. You specify this storage class as the cluster storage back end for the RHOSO control plane deployment. Use a storage back end based on SSD or NVMe drives for the storage class.
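
To check which storage classes already exist on your cluster, and which one is set as the default, you can list them before you choose one for the RHOSO control plane:

$ oc get storageclass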

If you do not have an existing storage class that can provide persistent volumes, you can use the Logical Volume Manager Storage (LVMS) Operator to provide a storage class for RHOSO. If you are using LVMS, you must wait until the LVM Storage Operator announces that the storage is available before you create the control plane. The LVM Storage Operator announces that the cluster and LVMS storage configuration is complete by adding the volume group capacity annotation to each worker node object. If you deploy pods before the storage is available on all worker nodes, multiple PVCs and pods are scheduled on the same nodes.

To check that the storage is ready, you can query the nodes in your lvmclusters.lvm.topolvm.io object. For example, run the following command if you have three worker nodes and your device class for the LVM Storage Operator is named "local-storage":

# oc get node -l "topology.topolvm.io/node in ($(oc get nodes -l node-role.kubernetes.io/worker -o name | cut -d '/' -f 2 | tr '\n' ',' | sed 's/.\{1\}$//'))" -o=jsonpath='{.items[*].metadata.annotations.capacity\.topolvm\.io/local-storage}' | tr ' ' '\n'

The storage is ready when this command returns three non-zero values.
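
As an additional check, you can review the LVMCluster resource status directly. The -A flag is used in this sketch because the namespace where the LVM Storage Operator is installed can vary:

$ oc get lvmclusters.lvm.topolvm.io -A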

2.3. Creating the openstack namespace

You must create a namespace within your Red Hat OpenShift Container Platform (RHOCP) environment for the service pods of your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. The service pods of each RHOSO deployment exist in their own namespace within the RHOCP environment.

Prerequisites

  • You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.

Procedure

  1. Create the openstack project for the deployed RHOSO environment:

    $ oc new-project openstack
  2. Ensure the openstack namespace is labeled to enable privileged pod creation by the OpenStack Operators:

    $ oc get namespace openstack -ojsonpath='{.metadata.labels}' | jq
    {
      "kubernetes.io/metadata.name": "openstack",
      "pod-security.kubernetes.io/enforce": "privileged",
      "security.openshift.io/scc.podSecurityLabelSync": "false"
    }

    If the security context constraint (SCC) is not "privileged", use the following commands to change it:

    $ oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite
    $ oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite
  3. Optional: To remove the need to specify the namespace when executing commands on the openstack namespace, set the default namespace to openstack:

    $ oc project openstack

2.4. Providing secure access to the Red Hat OpenStack Services on OpenShift services

You must create a Secret custom resource (CR) to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods. The following procedure creates a Secret CR with the required password formats for each service.

For an example Secret CR that generates the required passwords and fernet key for you, see Example Secret CR for secure access to the RHOSO service pods.

Warning

You cannot change a service password once the control plane is deployed. If a service password is changed in osp-secret after deploying the control plane, the service is reconfigured to use the new password but the password is not updated in the Identity service (keystone). This results in a service outage.

Prerequisites

  • You have installed python3-cryptography.

Procedure

  1. Create a Secret CR on your workstation, for example, openstack_service_secret.yaml.
  2. Add the following initial configuration to openstack_service_secret.yaml:

    apiVersion: v1
    data:
      AdminPassword: <base64_password>
      AodhPassword: <base64_password>
      AodhDatabasePassword: <base64_password>
      BarbicanDatabasePassword: <base64_password>
      BarbicanPassword: <base64_password>
      BarbicanSimpleCryptoKEK: <base64_fernet_key>
      CeilometerPassword: <base64_password>
      CinderDatabasePassword: <base64_password>
      CinderPassword: <base64_password>
      DatabasePassword: <base64_password>
      DbRootPassword: <base64_password>
      DesignateDatabasePassword: <base64_password>
      DesignatePassword: <base64_password>
      GlanceDatabasePassword: <base64_password>
      GlancePassword: <base64_password>
      HeatAuthEncryptionKey: <base64_password>
      HeatDatabasePassword: <base64_password>
      HeatPassword: <base64_password>
      IronicDatabasePassword: <base64_password>
      IronicInspectorDatabasePassword: <base64_password>
      IronicInspectorPassword: <base64_password>
      IronicPassword: <base64_password>
      KeystoneDatabasePassword: <base64_password>
      ManilaDatabasePassword: <base64_password>
      ManilaPassword: <base64_password>
      MetadataSecret: <base64_password>
      NeutronDatabasePassword: <base64_password>
      NeutronPassword: <base64_password>
      NovaAPIDatabasePassword: <base64_password>
      NovaAPIMessageBusPassword: <base64_password>
      NovaCell0DatabasePassword: <base64_password>
      NovaCell0MessageBusPassword: <base64_password>
      NovaCell1DatabasePassword: <base64_password>
      NovaCell1MessageBusPassword: <base64_password>
      NovaPassword: <base64_password>
      OctaviaDatabasePassword: <base64_password>
      OctaviaPassword: <base64_password>
      PlacementDatabasePassword: <base64_password>
      PlacementPassword: <base64_password>
      SwiftPassword: <base64_password>
    kind: Secret
    metadata:
      name: osp-secret
      namespace: openstack
    type: Opaque
    • Replace <base64_password> with a 32-character key that is base64 encoded.

      Note

      The HeatAuthEncryptionKey password must be a 32-character key for Orchestration service (heat) encryption. If you increase the length of the passwords for all other services, ensure that the HeatAuthEncryptionKey password remains at length 32.

      You can use the following command to manually generate a base64 encoded password:

      $ echo -n <password> | base64

      Alternatively, if you are using a Linux workstation and you are generating the Secret CR by using a Bash command such as cat, you can replace <base64_password> with the following command to auto-generate random passwords for each service:

      $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
    • Replace the <base64_fernet_key> with a base64 encoded fernet key. You can use the following command to manually generate it:

      $(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)
  3. Create the Secret CR in the cluster:

    $ oc create -f openstack_service_secret.yaml -n openstack
  4. Verify that the Secret CR is created:

    $ oc describe secret osp-secret -n openstack
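
If you want to confirm that an individual value is stored correctly, you can decode a single key from the Secret. The following sketch uses the AdminPassword key as an example:

$ oc get secret osp-secret -n openstack -o jsonpath='{.data.AdminPassword}' | base64 -d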

2.4.1. Example Secret CR for secure access to the RHOSO service pods

You must create a Secret custom resource (CR) file to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods.

If you are using a Linux workstation, you can create a Secret CR file called openstack_service_secret.yaml by using the following Bash cat command that generates the required passwords and fernet key for you:

$ cat <<EOF > openstack_service_secret.yaml
apiVersion: v1
data:
  AdminPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  AodhPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  AodhDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  BarbicanDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  BarbicanPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  BarbicanSimpleCryptoKEK: $(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)
  CeilometerPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  CinderDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  CinderPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  DatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  DbRootPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  DesignateDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  DesignatePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  GlanceDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  GlancePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  HeatAuthEncryptionKey: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  HeatDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  HeatPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  IronicDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  IronicInspectorDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  IronicInspectorPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  IronicPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  KeystoneDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  ManilaDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  ManilaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  MetadataSecret: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NeutronDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NeutronPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaAPIDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaAPIMessageBusPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaCell0DatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaCell0MessageBusPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaCell1DatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaCell1MessageBusPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  OctaviaDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  OctaviaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  PlacementDatabasePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  PlacementPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  SwiftPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
kind: Secret
metadata:
  name: osp-secret
  namespace: openstack
type: Opaque
EOF
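
After you generate the file, create the Secret CR and verify it in the same way as in the preceding procedure:

$ oc create -f openstack_service_secret.yaml -n openstack
$ oc describe secret osp-secret -n openstack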

To prepare for configuring and deploying your Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must configure the Red Hat OpenShift Container Platform (RHOCP) networks on your RHOCP cluster.

Note

If you need a centralized gateway for connection to external networks, you can add OVN gateways to the control plane or to dedicated Networker nodes on the data plane. For information about adding optional OVN gateways to the control plane, see Configuring OVN gateways for a Red Hat OpenStack Services on OpenShift deployment.

3.1. Default Red Hat OpenStack Services on OpenShift networks

The following physical data center networks are typically implemented for a Red Hat OpenStack Services on OpenShift (RHOSO) deployment:

  • Control plane network: used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances.
  • External network: (optional) used when required for your environment. For example, you might create an external network for any of the following purposes:

    • To provide virtual machine instances with Internet access.
    • To create flat provider networks that are separate from the control plane.
    • To configure VLAN provider networks on a separate bridge from the control plane.
    • To provide access to virtual machine instances with floating IPs on a network other than the control plane network.
  • Internal API network: used for internal communication between RHOSO components.
  • Storage network: used for block storage, RBD, NFS, FC, and iSCSI.
  • Tenant (project) network: used for data communication between virtual machine instances within the cloud deployment.
  • Octavia controller network: used to connect Load-balancing service (octavia) controllers running in the control plane.
  • Designate network: used internally by designate to manage the DNS servers.
  • Designateext network: used to provide external access to the DNS service resolver and the DNS servers.
  • Storage Management network: (optional) used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data.

    Note

    For more information on Red Hat Ceph Storage network configuration, see Ceph network configuration in the Red Hat Ceph Storage Configuration Guide.

The following table details the default networks used in a RHOSO deployment. If required, you can update the networks for your environment.

Note

By default, the control plane and external networks do not use VLANs. Networks that do not use VLANs must be placed on separate NICs. You can use a VLAN for the control plane network on new RHOSO deployments. You can also use the Native VLAN on a trunked interface as the non-VLAN network. For example, you can have the control plane and the internal API on one NIC, and the external network with no VLAN on a separate NIC.

Table 3.1. Default RHOSO networks

| Network name | CIDR | NetConfig allocationRange | MetalLB IPAddressPool range | net-attach-def ipam range | OCP worker nncp range |
| --- | --- | --- | --- | --- | --- |
| ctlplane | 192.168.122.0/24 | 192.168.122.100 - 192.168.122.250 | 192.168.122.80 - 192.168.122.90 | 192.168.122.30 - 192.168.122.70 | 192.168.122.10 - 192.168.122.20 |
| external | 10.0.0.0/24 | 10.0.0.100 - 10.0.0.250 | n/a | n/a | n/a |
| internalapi | 172.17.0.0/24 | 172.17.0.100 - 172.17.0.250 | 172.17.0.80 - 172.17.0.90 | 172.17.0.30 - 172.17.0.70 | 172.17.0.10 - 172.17.0.20 |
| storage | 172.18.0.0/24 | 172.18.0.100 - 172.18.0.250 | n/a | 172.18.0.30 - 172.18.0.70 | 172.18.0.10 - 172.18.0.20 |
| tenant | 172.19.0.0/24 | 172.19.0.100 - 172.19.0.250 | n/a | 172.19.0.30 - 172.19.0.70 | 172.19.0.10 - 172.19.0.20 |
| octavia | 172.23.0.0/24 | n/a | n/a | 172.23.0.30 - 172.23.0.70 | n/a |
| designate | 172.26.0.0/24 | n/a | n/a | 172.26.0.30 - 172.26.0.70 | 172.26.0.10 - 172.26.0.20 |
| designateext | 172.34.0.0/24 | n/a | 172.34.0.80 - 172.34.0.120 | 172.34.0.30 - 172.34.0.70 | 172.34.0.10 - 172.34.0.20 |
| storageMgmt | 172.20.0.0/24 | 172.20.0.100 - 172.20.0.250 | n/a | 172.20.0.30 - 172.20.0.70 | 172.20.0.10 - 172.20.0.20 |

The following table specifies the networks that establish connectivity to the fabric through eth2 and eth3, with different IP addresses for each zone and rack, and a global bgpmainnet network that is used as the source for the traffic:

Table 3.2. Zone connectivity

| Network name | Zone 0 | Zone 1 | Zone 2 |
| --- | --- | --- | --- |
| BGP Net1 (eth2) | 100.64.0.0/24 | 100.64.1.0/24 | 100.64.2.0/24 |
| BGP Net2 (eth3) | 100.65.0.0/24 | 100.65.1.0/24 | 100.65.2.0/24 |
| Bgpmainnet (loopback) | 99.99.0.0/24 | 99.99.1.0/24 | 99.99.2.0/24 |

3.2. Preparing RHOCP for RHOSO networks

The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. A RHOSO environment uses isolated networks to separate different types of network traffic, which improves security, performance, and management. You must connect the RHOCP worker nodes to your isolated networks and expose the internal service endpoints on the isolated networks. The public service endpoints are exposed as RHOCP routes by default, because only routes are supported for public endpoints.

Important

The control plane interface name must be consistent across all nodes because network manifests reference the control plane interface name directly. If the control plane interface names are inconsistent, then the RHOSO environment fails to deploy. If the physical interface names are inconsistent on the nodes, you must create a Linux bond that configures a consistent alternative name for the physical interfaces that can be referenced by the other network manifests.

Note

The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual stack IPv4/6 is not available. For information about how to configure IPv6 addresses, see the following resources in the RHOCP Networking guide:

3.2.1. Preparing RHOCP with isolated network interfaces

You use the NMState Operator to connect the RHOCP worker nodes to your isolated networks. Create a NodeNetworkConfigurationPolicy (nncp) CR to configure the interfaces for each isolated network on each worker node in the RHOCP cluster.

Procedure

  1. Create a NodeNetworkConfigurationPolicy (nncp) CR file on your workstation, for example, openstack-nncp.yaml.
  2. Retrieve the names of the worker nodes in the RHOCP cluster:

    $ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"
  3. Discover the network configuration:

    $ oc get nns/<worker_node> -o yaml | more
    • Replace <worker_node> with the name of a worker node retrieved in step 2, for example, worker-1. Repeat this step for each worker node.
  4. In the nncp CR file, configure the interfaces for each isolated network on each worker node in the RHOCP cluster. For information about the default physical data center networks that must be configured with network isolation, see Default Red Hat OpenStack Services on OpenShift networks.

    In the following example, the nncp CR configures the enp6s0 interface for worker node 1, osp-enp6s0-worker-1, to use VLAN interfaces with IPv4 addresses for network isolation:

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: osp-enp6s0-worker-1
    spec:
      desiredState:
        interfaces:
        - description: internalapi vlan interface
          ipv4:
            address:
            - ip: 172.17.0.10
              prefix-length: 24
            enabled: true
            dhcp: false
          ipv6:
            enabled: false
          name: internalapi
          state: up
          type: vlan
          vlan:
            base-iface: enp6s0
            id: 20
            reorder-headers: true
        - description: storage vlan interface
          ipv4:
            address:
            - ip: 172.18.0.10
              prefix-length: 24
            enabled: true
            dhcp: false
          ipv6:
            enabled: false
          name: storage
          state: up
          type: vlan
          vlan:
            base-iface: enp6s0
            id: 21
            reorder-headers: true
        - description: tenant vlan interface
          ipv4:
            address:
            - ip: 172.19.0.10
              prefix-length: 24
            enabled: true
            dhcp: false
          ipv6:
            enabled: false
          name: tenant
          state: up
          type: vlan
          vlan:
            base-iface: enp6s0
            id: 22
            reorder-headers: true
        - description: Configuring enp6s0
          ipv4:
            address:
            - ip: 192.168.122.10
              prefix-length: 24
            enabled: true
            dhcp: false
          ipv6:
            enabled: false
          mtu: 1500
          name: enp6s0
          state: up
          type: ethernet
        - description: octavia vlan interface
          name: octavia
          state: up
          type: vlan
          vlan:
            base-iface: enp6s0
            id: 24
            reorder-headers: true
        - bridge:
            options:
              stp:
                enabled: false
            port:
            - name: enp6s0.24
          description: Configuring bridge octbr
          mtu: 1500
          name: octbr
          state: up
          type: linux-bridge
        - description: designate vlan interface
          ipv4:
            address:
            - ip: 172.26.0.10
              prefix-length: 24
            dhcp: false
            enabled: true
          ipv6:
            enabled: false
          mtu: 1500
          name: designate
          state: up
          type: vlan
          vlan:
            base-iface: enp7s0
            id: 25
            reorder-headers: true
        - description: designate external vlan interface
          ipv4:
            address:
            - ip: 172.34.0.10
              prefix-length: 24
            dhcp: false
            enabled: true
          ipv6:
            enabled: false
          mtu: 1500
          name: designateext
          state: up
          type: vlan
          vlan:
            base-iface: enp7s0
            id: 26
            reorder-headers: true
      nodeSelector:
        kubernetes.io/hostname: worker-1
        node-role.kubernetes.io/worker: ""
  5. Create the nncp CR in the cluster:

    $ oc apply -f openstack-nncp.yaml
  6. Verify that the nncp CR is created:

    $ oc get nncp -w
    NAME                        STATUS        REASON
    osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
    osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
    osp-enp6s0-worker-1   Available     SuccessfullyConfigured
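
Instead of watching the status output, you can wait for the policy to report the Available condition. This sketch assumes the nncp CR name from the preceding example and an arbitrary timeout:

$ oc wait nncp osp-enp6s0-worker-1 --for=condition=Available --timeout=300s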

3.2.2. Attaching service pods to the isolated networks

Create a NetworkAttachmentDefinition (net-attach-def) custom resource (CR) for each isolated network to attach the service pods to the networks.

Procedure

  1. Create a NetworkAttachmentDefinition (net-attach-def) CR file on your workstation, for example, openstack-net-attach-def.yaml.
  2. In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for each isolated network to attach a service deployment pod to the network. The following examples create a NetworkAttachmentDefinition resource for the following networks:

    • internalapi, storage, ctlplane, and tenant networks of type macvlan.
    • octavia, the load-balancing management network, of type bridge. This network attachment connects pods that manage load balancer virtual machines (amphorae) and the Open vSwitch pods that are managed by the OVN operator.
    • designate network used internally by the DNS service (designate) to manage the DNS servers.
    • designateext network used to provide external access to the DNS service resolver and the DNS servers.
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: internalapi
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "internalapi",
          "type": "macvlan",
          "master": "internalapi",
          "ipam": {
            "type": "whereabouts",
            "range": "172.17.0.0/24",
            "range_start": "172.17.0.30",
            "range_end": "172.17.0.70"
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: ctlplane
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "ctlplane",
          "type": "macvlan",
          "master": "enp6s0",
          "ipam": {
            "type": "whereabouts",
            "range": "192.168.122.0/24",
            "range_start": "192.168.122.30",
            "range_end": "192.168.122.70"
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: storage
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "storage",
          "type": "macvlan",
          "master": "storage",
          "ipam": {
            "type": "whereabouts",
            "range": "172.18.0.0/24",
            "range_start": "172.18.0.30",
            "range_end": "172.18.0.70"
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: tenant
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "tenant",
          "type": "macvlan",
          "master": "tenant",
          "ipam": {
            "type": "whereabouts",
            "range": "172.19.0.0/24",
            "range_start": "172.19.0.30",
            "range_end": "172.19.0.70"
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      labels:
        osp/net: octavia
      name: octavia
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "octavia",
          "type": "bridge",
          "bridge": "octbr",
          "ipam": {
            "type": "whereabouts",
            "range": "172.23.0.0/24",
            "range_start": "172.23.0.30",
            "range_end": "172.23.0.70",
            "routes": [
               {
                 "dst": "172.24.0.0/16",
                 "gw" : "172.23.0.150"
               }
             ]
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: designate
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "designate",
          "type": "macvlan",
          "master": "designate",
          "ipam": {
            "type": "whereabouts",
            "range": "172.26.0.0/16",
            "range_start": "172.26.0.30",
            "range_end": "172.26.0.70",
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: designateext
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "designateext",
          "type": "macvlan",
          "master": "designateext",
          "ipam": {
            "type": "whereabouts",
            "range": "172.34.0.0/16",
            "range_start": "172.34.0.30",
            "range_end": "172.34.0.70",
          }
        }
    • metadata.namespace: The namespace where the services are deployed.
    • "master": The node interface name associated with the network, as defined in the nncp CR.
    • "ipam": The whereabouts CNI IPAM plug-in assigns IPs to the created pods from the range .30 - .70.
    • "range_start" - "range_end": The IP address pool range must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange.
  3. Create the NetworkAttachmentDefinition CR in the cluster:

    $ oc apply -f openstack-net-attach-def.yaml
  4. Verify that the NetworkAttachmentDefinition CR is created:

    $ oc get net-attach-def -n openstack
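
To review the rendered configuration of a single attachment, for example the internalapi network, you can view its YAML definition:

$ oc get net-attach-def internalapi -n openstack -o yaml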

3.2.3. Preparing RHOCP for RHOSO network VIPs

You use the MetalLB Operator to expose internal service endpoints on the isolated networks. You must create an L2Advertisement resource to define how the Virtual IPs (VIPs) are announced, and an IPAddressPool resource to configure which IPs can be used as VIPs. In layer 2 mode, one node assumes the responsibility of advertising a service to the local network.

Procedure

  1. Create an IPAddressPool CR file on your workstation, for example, openstack-ipaddresspools.yaml.
  2. In the IPAddressPool CR file, configure an IPAddressPool resource on the isolated network to specify the IP address ranges over which MetalLB has authority:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      namespace: metallb-system
      name: ctlplane
    spec:
      addresses:
        - 192.168.122.80-192.168.122.90
      autoAssign: true
      avoidBuggyIPs: false
    ---
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: internalapi
      namespace: metallb-system
    spec:
      addresses:
        - 172.17.0.80-172.17.0.90
      autoAssign: true
      avoidBuggyIPs: false
    ---
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      namespace: metallb-system
      name: storage
    spec:
      addresses:
        - 172.18.0.80-172.18.0.90
      autoAssign: true
      avoidBuggyIPs: false
    ---
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      namespace: metallb-system
      name: tenant
    spec:
      addresses:
        - 172.19.0.80-172.19.0.90
      autoAssign: true
      avoidBuggyIPs: false
    ---
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      namespace: metallb-system
      name: designateext
    spec:
      addresses:
        - 172.34.0.80-172.34.0.120
      autoAssign: true
      avoidBuggyIPs: false
    ---
    • spec.addresses: The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange.

    For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide.

  3. Create the IPAddressPool CR in the cluster:

    $ oc apply -f openstack-ipaddresspools.yaml
  4. Verify that the IPAddressPool CR is created:

    $ oc describe -n metallb-system IPAddressPool
  5. Create a L2Advertisement CR file on your workstation, for example, openstack-l2advertisement.yaml.
  6. In the L2Advertisement CR file, configure L2Advertisement CRs to define which node advertises a service to the local network. Create one L2Advertisement resource for each network.

    In the following example, each L2Advertisement CR specifies that the VIPs requested from the network address pools are announced on the interface that is attached to the VLAN:

    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: ctlplane
      namespace: metallb-system
    spec:
      ipAddressPools:
      - ctlplane
      interfaces:
      - enp6s0
      nodeSelectors:
      - matchLabels:
          node-role.kubernetes.io/worker: ""
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: internalapi
      namespace: metallb-system
    spec:
      ipAddressPools:
      - internalapi
      interfaces:
      - internalapi
      nodeSelectors:
      - matchLabels:
          node-role.kubernetes.io/worker: ""
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: storage
      namespace: metallb-system
    spec:
      ipAddressPools:
      - storage
      interfaces:
      - storage
      nodeSelectors:
      - matchLabels:
          node-role.kubernetes.io/worker: ""
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: tenant
      namespace: metallb-system
    spec:
      ipAddressPools:
      - tenant
      interfaces:
      - tenant
      nodeSelectors:
      - matchLabels:
          node-role.kubernetes.io/worker: ""
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: designateext
      namespace: metallb-system
    spec:
      ipAddressPools:
      - designateext
      interfaces:
      - designateext
      nodeSelectors:
      - matchLabels:
          node-role.kubernetes.io/worker: ""
    • spec.interfaces: The interface where the VIPs requested from the VLAN address pool are announced.

    For information about how to configure the other L2Advertisement resource parameters, see Configuring MetalLB with a L2 advertisement and label in the RHOCP Networking guide.

  7. Create the L2Advertisement CRs in the cluster:

    $ oc apply -f openstack-l2advertisement.yaml
  8. Verify that the L2Advertisement CRs are created:

    $ oc get -n metallb-system L2Advertisement
    NAME          IPADDRESSPOOLS    IPADDRESSPOOL SELECTORS   INTERFACES
    ctlplane      ["ctlplane"]                                ["enp6s0"]
    designateext  ["designateext"]                            ["designateext"]
    internalapi   ["internalapi"]                             ["internalapi"]
    storage       ["storage"]                                 ["storage"]
    tenant        ["tenant"]                                  ["tenant"]
  9. If your cluster has OVNKubernetes as the network back end, then you must enable global forwarding so that MetalLB can work on a secondary network interface.

    1. Check the network back end used by your cluster:

      $ oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.type}'
    2. If the back end is OVNKubernetes, then run the following command to enable global IP forwarding:

      $ oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge
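
You can also confirm that the MetalLB controller and speaker pods are running in the metallb-system namespace before you rely on the advertised VIPs:

$ oc get pods -n metallb-system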

3.3. Creating the data plane network

To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. You can also define VLAN networks to create network isolation for composable networks, such as InternalAPI, Storage, and External. Each network definition must include the IP address assignment.

Tip

Use the following commands to view the NetConfig CRD definition and specification schema:

$ oc describe crd netconfig

$ oc explain netconfig.spec

Procedure

  1. Create a file named openstack_netconfig.yaml on your workstation.
  2. Add the following configuration to openstack_netconfig.yaml to create the NetConfig CR:

    apiVersion: network.openstack.org/v1beta1
    kind: NetConfig
    metadata:
      name: openstacknetconfig
      namespace: openstack
  3. In the openstack_netconfig.yaml file, define the topology for each data plane network. To use the default Red Hat OpenStack Services on OpenShift (RHOSO) networks, you must define a specification for each network. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks. The following example creates isolated networks for the data plane:

    spec:
      networks:
      - name: CtlPlane
        dnsDomain: ctlplane.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 192.168.122.120
            start: 192.168.122.100
          - end: 192.168.122.200
            start: 192.168.122.150
          cidr: 192.168.122.0/24
          gateway: 192.168.122.1
      - name: InternalApi
        dnsDomain: internalapi.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.17.0.250
            start: 172.17.0.100
          excludeAddresses:
          - 172.17.0.10
          - 172.17.0.12
          cidr: 172.17.0.0/24
          vlan: 20
      - name: External
        dnsDomain: external.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 10.0.0.250
            start: 10.0.0.100
          cidr: 10.0.0.0/24
          gateway: 10.0.0.1
      - name: Storage
        dnsDomain: storage.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.18.0.250
            start: 172.18.0.100
          cidr: 172.18.0.0/24
          vlan: 21
      - name: Tenant
        dnsDomain: tenant.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.19.0.250
            start: 172.19.0.100
          cidr: 172.19.0.0/24
          vlan: 22
    • spec.networks.name: The name of the network, for example, CtlPlane.
    • spec.networks.subnets: The IPv4 subnet specification.
    • spec.networks.subnets.name: The name of the subnet, for example, subnet1.
    • spec.networks.subnets.allocationRanges: The NetConfig allocationRange. The allocationRange must not overlap with the MetalLB IPAddressPool range and the IP address pool range.
    • spec.networks.subnets.excludeAddresses: Optional: List of IP addresses from the allocation range that must not be used by data plane nodes.
    • spec.networks.subnets.vlan: The network VLAN. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks.
  4. Save the openstack_netconfig.yaml definition file.
  5. Create the data plane network:

    $ oc create -f openstack_netconfig.yaml -n openstack
  6. To verify that the data plane network is created, view the openstacknetconfig resource:

    $ oc get netconfig/openstacknetconfig -n openstack

    If you see errors, check the underlying network attachment definitions and node network configuration policies:

    $ oc get network-attachment-definitions -n openstack
    $ oc get nncp
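
For more detail about the rendered networks and subnets, you can describe the NetConfig resource:

$ oc describe netconfig/openstacknetconfig -n openstack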

3.4. Additional resources

Chapter 4. Creating the control plane

The Red Hat OpenStack Services on OpenShift (RHOSO) control plane contains the RHOSO services that manage the cloud. The RHOSO services run as a Red Hat OpenShift Container Platform (RHOCP) workload.

Note

Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands.

4.1. Prerequisites

  • The OpenStack Operator (openstack-operator) is installed. For more information, see Installing and preparing the Operators.
  • The RHOCP cluster is prepared for RHOSO networks. For more information, see Preparing RHOCP for RHOSO networks.
  • The RHOCP cluster is not configured with any network policies that prevent communication between the openstack-operators namespace and the control plane namespace (default openstack). Use the following command to check the existing network policies on the cluster:

    $ oc get networkpolicy -n openstack

    This command returns the message "No resources found in openstack namespace" when there are no network policies. If this command returns a list of network policies, then check that they do not prevent communication between the openstack-operators namespace and the control plane namespace. For more information about network policies, see Network security in the RHOCP Networking guide.

  • You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.

4.2. Creating the control plane

You must define an OpenStackControlPlane custom resource (CR) to create the control plane and enable the Red Hat OpenStack Services on OpenShift (RHOSO) services.

The following procedure creates an initial control plane with the recommended configurations for each service. The procedure helps you quickly create an operating control plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add service customizations to a deployed environment. For more information about how to customize your control plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.

For an example OpenStackControlPlane CR, see Example OpenStackControlPlane CR.

Tip

Use the following commands to view the OpenStackControlPlane CRD definition and specification schema:

$ oc describe crd openstackcontrolplane

$ oc explain openstackcontrolplane.spec

Procedure

  1. Create a file on your workstation named openstack_control_plane.yaml to define the OpenStackControlPlane CR:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
  2. Specify the Secret CR you created to provide secure access to the RHOSO service pods in Providing secure access to the Red Hat OpenStack Services on OpenShift services:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      secret: osp-secret
  3. Specify the storageClass you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end:

    spec:
      secret: osp-secret
      storageClass: <RHOCP_storage_class>
    • Replace <RHOCP_storage_class> with the storage class you created for your RHOCP cluster storage back end. For information about storage classes, see Creating a storage class.
  4. Add the following service configurations:

    Note
    • The following service examples use IP addresses from the default RHOSO MetalLB IPAddressPool range for the loadBalancerIPs field. Update the loadBalancerIPs field with the IP address from the MetalLB IPAddressPool range that you created.
    • You cannot override the default public service endpoint. The public service endpoints are exposed as RHOCP routes by default, because only routes are supported for public endpoints.
    • Block Storage service (cinder):

        cinder:
          apiOverride:
            route: {}
          template:
            databaseInstance: openstack
            secret: osp-secret
            cinderAPI:
              replicas: 3
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
            cinderScheduler:
              replicas: 1
            cinderBackup:
              networkAttachments:
              - storage
              replicas: 0
            cinderVolumes:
              volume1:
                networkAttachments:
                - storage
                replicas: 0
      • cinderBackup.replicas: You can deploy the initial control plane without activating the cinderBackup service. To deploy the service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for each service and how to configure a back end for the Block Storage service and the backup service, see Configuring the Block Storage backup service in Configuring persistent storage.
      • cinderVolumes.replicas: You can deploy the initial control plane without activating the cinderVolumes service. To deploy the service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for the cinderVolumes service and how to configure a back end for the service, see Configuring the volume service in Configuring persistent storage.
    • Compute service (nova):

        nova:
          apiOverride:
            route: {}
          template:
            apiServiceTemplate:
              replicas: 3
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
            metadataServiceTemplate:
              replicas: 3
              override:
                service:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            schedulerServiceTemplate:
              replicas: 3
            cellTemplates:
              cell0:
                cellDatabaseAccount: nova-cell0
                cellDatabaseInstance: openstack
                cellMessageBusInstance: rabbitmq
                hasAPIAccess: true
              cell1:
                cellDatabaseAccount: nova-cell1
                cellDatabaseInstance: openstack-cell1
                cellMessageBusInstance: rabbitmq-cell1
                noVNCProxyServiceTemplate:
                  enabled: true
                  networkAttachments:
                  - ctlplane
                hasAPIAccess: true
            secret: osp-secret
      Note

      A full set of Compute services (nova) is deployed by default for each of the default cells, cell0 and cell1: nova-api, nova-metadata, nova-scheduler, and nova-conductor. The novncproxy service is also enabled for cell1 by default.

    • DNS service for the data plane:

        dns:
          template:
            options:
            - key: server
              values:
              - 192.168.122.1
            - key: server
              values:
              - 192.168.122.2
            override:
              service:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: ctlplane
                    metallb.universe.tf/allow-shared-ip: ctlplane
                    metallb.universe.tf/loadBalancerIPs: 192.168.122.80
                spec:
                  type: LoadBalancer
            replicas: 2
      • dns.template.options: Defines the dnsmasq instances required for each DNS server by using key-value pairs. In this example, there are two key-value pairs defined because there are two DNS servers configured to forward requests to.
      • dns.template.options.key: Specifies the dnsmasq parameter to customize for the deployed dnsmasq instance. Set to one of the following valid values:

        • server
        • rev-server
        • srv-host
        • txt-record
        • ptr-record
        • rebind-domain-ok
        • naptr-record
        • cname
        • host-record
        • caa-record
        • dns-rr
        • auth-zone
        • synth-domain
        • no-negcache
        • local
      • dns.template.options.values: Specifies the values for the dnsmasq parameter. You can specify a generic DNS server as the value, for example, 1.1.1.1, or a DNS server for a specific domain, for example, /google.com/8.8.8.8.

        Note

        This DNS service, dnsmasq, provides DNS services for nodes on the RHOSO data plane. dnsmasq is different from the RHOSO DNS service (designate) that provides DNS as a service for cloud tenants.

    • Identity service (keystone)

        keystone:
          apiOverride:
            route: {}
          template:
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            databaseInstance: openstack
            secret: osp-secret
            replicas: 3
    • Image service (glance):

        glance:
          apiOverrides:
            default:
              route: {}
          template:
            databaseInstance: openstack
            storage:
              storageRequest: 10G
            secret: osp-secret
            keystoneEndpoint: default
            glanceAPIs:
              default:
                replicas: 0 # Configure back end; set to 3 when deploying service
                override:
                  service:
                    internal:
                      metadata:
                        annotations:
                          metallb.universe.tf/address-pool: internalapi
                          metallb.universe.tf/allow-shared-ip: internalapi
                          metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                      spec:
                        type: LoadBalancer
                networkAttachments:
                - storage
      • glanceAPIs.default.replicas: You can deploy the initial control plane without activating the Image service (glance). To deploy the Image service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for the Image service and how to configure a back end for the service, see Configuring the Image service (glance) in Configuring persistent storage. If you do not deploy the Image service, you cannot upload images to the cloud or start an instance.
    • Key Management service (barbican):

        barbican:
          apiOverride:
            route: {}
          template:
            databaseInstance: openstack
            secret: osp-secret
            barbicanAPI:
              replicas: 3
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
            barbicanWorker:
              replicas: 3
            barbicanKeystoneListener:
              replicas: 1
    • Networking service (neutron):

        neutron:
          apiOverride:
            route: {}
          template:
            replicas: 3
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            databaseInstance: openstack
            secret: osp-secret
            networkAttachments:
            - internalapi
    • Object Storage service (swift):

        swift:
          enabled: true
          proxyOverride:
            route: {}
          template:
            swiftProxy:
              networkAttachments:
              - storage
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
              replicas: 2
              secret: osp-secret
            swiftRing:
              ringReplicas: 3
            swiftStorage:
              networkAttachments:
              - storage
              replicas: 3
              storageRequest: 10Gi
    • OVN:

        ovn:
          template:
            ovnDBCluster:
              ovndbcluster-nb:
                replicas: 3
                dbType: NB
                storageRequest: 10G
                networkAttachment: internalapi
              ovndbcluster-sb:
                replicas: 3
                dbType: SB
                storageRequest: 10G
                networkAttachment: internalapi
            ovnNorthd: {}
    • Placement service (placement):

        placement:
          apiOverride:
            route: {}
          template:
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            databaseInstance: openstack
            replicas: 3
            secret: osp-secret
    • Telemetry service (ceilometer, prometheus):

        telemetry:
          enabled: true
          template:
            metricStorage:
              enabled: true
              dashboardsEnabled: true
              dataplaneNetwork: ctlplane
              networkAttachments:
                - ctlplane
              monitoringStack:
                alertingEnabled: true
                scrapeInterval: 30s
                storage:
                  strategy: persistent
                  retention: 24h
                  persistent:
                    pvcStorageRequest: 20G
            autoscaling:
              enabled: false
              aodh:
                databaseAccount: aodh
                databaseInstance: openstack
                passwordSelector:
                  aodhService: AodhPassword
                rabbitMqClusterName: rabbitmq
                serviceUser: aodh
                secret: osp-secret
              heatInstance: heat
            ceilometer:
              enabled: true
              secret: osp-secret
            logging:
              enabled: false
      • telemetry.template.metricStorage.dataplaneNetwork: Defines the network that you use to scrape dataplane node_exporter endpoints.
      • telemetry.template.metricStorage.networkAttachments: Lists the networks that each service pod is attached to by using the NetworkAttachmentDefinition resource names. You configure a NIC for the service for each network attachment that you specify. If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. You must create a networkAttachment that matches the network that you specify as the dataplaneNetwork, so that Prometheus can scrape data from the dataplane nodes.
      • telemetry.template.autoscaling: You must have the autoscaling field present, even if autoscaling is disabled. For more information about autoscaling, see Autoscaling for Instances.
  5. Add the following service configurations to implement high availability (HA):

    • A MariaDB Galera cluster for use by all RHOSO services (openstack), and a MariaDB Galera cluster for use by the Compute service for cell1 (openstack-cell1):

        galera:
          templates:
            openstack:
              storageRequest: 5000M
              secret: osp-secret
              replicas: 3
            openstack-cell1:
              storageRequest: 5000M
              secret: osp-secret
              replicas: 3
    • A single memcached cluster that contains three memcached servers:

        memcached:
          templates:
            memcached:
               replicas: 3
    • A RabbitMQ cluster for use by all RHOSO services (rabbitmq), and a RabbitMQ cluster for use by the Compute service for cell1 (rabbitmq-cell1):

        rabbitmq:
          templates:
            rabbitmq:
              replicas: 3
              override:
                service:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.85
                  spec:
                    type: LoadBalancer
            rabbitmq-cell1:
              replicas: 3
              override:
                service:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.86
                  spec:
                    type: LoadBalancer
      Note

      You cannot configure multiple RabbitMQ instances on the same virtual IP (VIP) address because all RabbitMQ instances use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses.

  6. Create the control plane:

    $ oc create -f openstack_control_plane.yaml -n openstack
  7. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.
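
    For example, to watch the control plane status until the deployment completes:

    $ oc get openstackcontrolplane -n openstack -w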

    Note

    Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands.

    $ oc rsh -n openstack openstackclient
  8. Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are either completed or running.

Verification

  1. Open a remote shell connection to the OpenStackClient pod:

    $ oc rsh -n openstack openstackclient
  2. Confirm that the internal service endpoints are registered with each service:

    $ openstack endpoint list -c 'Service Name' -c Interface -c URL --service glance
    +--------------+-----------+--------------------------------------------------------------------------+
    | Service Name | Interface | URL                                                                      |
    +--------------+-----------+--------------------------------------------------------------------------+
    | glance       | internal  | https://glance-internal.openstack.svc                                    |
    | glance       | public    | https://glance-default-public-openstack.apps.ostest.test.metalkube.org  |
    +--------------+-----------+--------------------------------------------------------------------------+
  3. Exit the OpenStackClient pod:

    $ exit

4.3. Example OpenStackControlPlane CR

The following example OpenStackControlPlane CR is a complete control plane configuration that includes all the key services that must always be enabled for a successful deployment.

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  secret: osp-secret
  storageClass: your-RHOCP-storage-class
  cinder:
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      cinderAPI:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      cinderScheduler:
        replicas: 1
      cinderBackup:
        networkAttachments:
        - storage
        replicas: 0 # backend needs to be configured to activate the service
      cinderVolumes:
        volume1:
          networkAttachments:
          - storage
          replicas: 0 # backend needs to be configured to activate the service
  nova:
    apiOverride:
      route: {}
    template:
      apiServiceTemplate:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      metadataServiceTemplate:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      schedulerServiceTemplate:
        replicas: 3
      cellTemplates:
        cell0:
          cellDatabaseAccount: nova-cell0
          cellDatabaseInstance: openstack
          cellMessageBusInstance: rabbitmq
          hasAPIAccess: true
        cell1:
          cellDatabaseAccount: nova-cell1
          cellDatabaseInstance: openstack-cell1
          cellMessageBusInstance: rabbitmq-cell1
          noVNCProxyServiceTemplate:
            enabled: true
            networkAttachments:
            - ctlplane
          hasAPIAccess: true
      secret: osp-secret
  dns:
    template:
      options:
      - key: server
        values:
        - 192.168.122.1
      - key: server
        values:
        - 192.168.122.2
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: ctlplane
              metallb.universe.tf/allow-shared-ip: ctlplane
              metallb.universe.tf/loadBalancerIPs: 192.168.122.80
          spec:
            type: LoadBalancer
      replicas: 2
  galera:
    templates:
      openstack:
        storageRequest: 5000M
        secret: osp-secret
        replicas: 3
      openstack-cell1:
        storageRequest: 5000M
        secret: osp-secret
        replicas: 3
  keystone:
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
      replicas: 3
  glance:
    apiOverrides:
      default:
        route: {}
    template:
      databaseInstance: openstack
      storage:
        storageRequest: 10G
      secret: osp-secret
      keystoneEndpoint: default
      glanceAPIs:
        default:
          replicas: 0 # Configure back end; set to 3 when deploying service
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          networkAttachments:
          - storage
  barbican:
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      barbicanAPI:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      barbicanWorker:
        replicas: 3
      barbicanKeystoneListener:
        replicas: 1
  memcached:
    templates:
      memcached:
         replicas: 3
  neutron:
    apiOverride:
      route: {}
    template:
      replicas: 3
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
      networkAttachments:
      - internalapi
  swift:
    enabled: true
    proxyOverride:
      route: {}
    template:
      swiftProxy:
        networkAttachments:
        - storage
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        replicas: 1
      swiftRing:
        ringReplicas: 1
      swiftStorage:
        networkAttachments:
        - storage
        replicas: 1
        storageRequest: 10Gi
  ovn:
    template:
      ovnDBCluster:
        ovndbcluster-nb:
          replicas: 3
          dbType: NB
          storageRequest: 10G
          networkAttachment: internalapi
        ovndbcluster-sb:
          replicas: 3
          dbType: SB
          storageRequest: 10G
          networkAttachment: internalapi
      ovnNorthd: {}
  placement:
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      replicas: 3
      secret: osp-secret
  rabbitmq:
    templates:
      rabbitmq:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.85
            spec:
              type: LoadBalancer
      rabbitmq-cell1:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.86
            spec:
              type: LoadBalancer
  telemetry:
    enabled: true
    template:
      metricStorage:
        enabled: true
        dashboardsEnabled: true
        dataplaneNetwork: ctlplane
        networkAttachments:
          - ctlplane
        monitoringStack:
          alertingEnabled: true
          scrapeInterval: 30s
          storage:
            strategy: persistent
            retention: 24h
            persistent:
              pvcStorageRequest: 20G
      autoscaling:
        enabled: false
        aodh:
          databaseAccount: aodh
          databaseInstance: openstack
          passwordSelector:
            aodhService: AodhPassword
          rabbitMqClusterName: rabbitmq
          serviceUser: aodh
          secret: osp-secret
        heatInstance: heat
      ceilometer:
        enabled: true
        secret: osp-secret
      logging:
        enabled: false
  • spec.storageClass: The storage class that you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end.
  • spec.cinder: Service-specific parameters for the Block Storage service (cinder).
  • spec.cinder.template.cinderBackup: The Block Storage service back end. For more information on configuring storage services, see the Configuring persistent storage guide.
  • spec.cinder.template.cinderVolumes: The Block Storage service configuration. For more information on configuring storage services, see the Configuring persistent storage guide.
  • spec.cinder.template.cinderVolumes.networkAttachments: The list of networks that each service pod is directly attached to, specified by using the NetworkAttachmentDefinition resource names. A NIC is configured for the service for each specified network attachment.

    Note

    If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. For example, the Block Storage service uses the storage network to connect to a storage back end; the Identity service (keystone) uses an LDAP or Active Directory (AD) network; the ovnDBCluster service uses the internalapi network; and the ovnController service uses the tenant network.

  • spec.nova: Service-specific parameters for the Compute service (nova).
  • spec.nova.apiOverride: Service API route definition. You can customize the service route by using route-specific annotations. For more information, see Route-specific annotations in the RHOCP Networking guide. Set route: to {} to apply the default route template.
  • metallb.universe.tf/address-pool: The internal service API endpoint registered as a MetalLB service with the IPAddressPool internalapi.
  • metallb.universe.tf/loadBalancerIPs: The virtual IP (VIP) address for the service. The IP is shared with other services by default.
  • spec.rabbitmq: The RabbitMQ instances exposed to an isolated network, each with a distinct IP address defined in its loadBalancerIPs annotation.

    Note

    You cannot configure multiple RabbitMQ instances on the same virtual IP (VIP) address because all RabbitMQ instances use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses.

  • rabbitmq.override.service.metadata.annotations.metallb.universe.tf/loadBalancerIPs: The distinct IP address for a RabbitMQ instance that is exposed to an isolated network.

4.4. Removing a service from the control plane

You can completely remove a service and the service database from the control plane after deployment by disabling the service. Many services are enabled by default, which means that the OpenStack Operator creates resources such as the service database and Identity service (keystone) users, even if no service pod is created because replicas is set to 0.

Warning

Remove a service with caution. Removing a service is not the same as stopping its service pods, and it is irreversible. Disabling a service removes the service database, and any resources that referenced the service are no longer tracked. Create a backup of the service database before you remove a service.
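
For example, a minimal backup sketch for the Block Storage service (cinder) database; the Galera pod name, database name, and root password placeholder are assumptions based on the default deployment shown in this guide, so adjust them for your environment:

$ oc exec -n openstack openstack-galera-0 -- \
  mysqldump -u root -p<db_root_password> cinder > cinder_backup.sql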

Procedure

  1. Open the OpenStackControlPlane CR file on your workstation.
  2. Locate the service you want to remove from the control plane and disable it:

      cinder:
        enabled: false
        apiOverride:
          route: {}
          ...
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP removes the resource related to the disabled service. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resource is updated with the disabled service when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  5. Optional: Confirm that the pods from the disabled service are no longer listed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack
  6. Check that the service is removed:

    $ oc get cinder -n openstack

    This command returns the following message when the service is successfully removed:

    No resources found in openstack namespace.
  7. Check that the API endpoints for the service are removed from the Identity service (keystone):

    $ oc rsh -n openstack openstackclient
    $ openstack endpoint list --service volumev3

    This command returns the following message when the API endpoints for the service are successfully removed:

    No service with a type, name or ID of 'volumev3' exists.

4.5. Additional resources

Chapter 5. Creating the data plane

The Red Hat OpenStack Services on OpenShift (RHOSO) data plane consists of RHEL 9.4 nodes. Use the OpenStackDataPlaneNodeSet custom resource definition (CRD) to create the custom resources (CRs) that define the nodes and the layout of the data plane. An OpenStackDataPlaneNodeSet CR is a logical grouping of nodes of a similar type. A data plane typically consists of multiple OpenStackDataPlaneNodeSet CRs to define groups of nodes with different configurations and roles.

You can use pre-provisioned or unprovisioned nodes in an OpenStackDataPlaneNodeSet CR:

  • Pre-provisioned node: You have used your own tooling to install the operating system on the node before adding it to the data plane.
  • Unprovisioned node: The node does not have an operating system installed before you add it to the data plane. The node is provisioned by using the Cluster Baremetal Operator (CBO) as part of the data plane creation and deployment process.
Note

You cannot include both pre-provisioned and unprovisioned nodes in the same OpenStackDataPlaneNodeSet CR.

To create and deploy a data plane, you must perform the following tasks:

  1. Create a Secret CR for each node set for Ansible to use to execute commands on the data plane nodes.
  2. Create the OpenStackDataPlaneNodeSet CRs that define the nodes and layout of the data plane.
  3. Create the OpenStackDataPlaneDeployment CR that triggers the Ansible execution that deploys and configures the software for the specified list of OpenStackDataPlaneNodeSet CRs.

The following procedures create two simple node sets, one with pre-provisioned nodes, and one with bare-metal nodes that must be provisioned during the node set deployment. The procedures aim to get you up and running quickly with a data plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add additional node sets to a deployed environment, and you can customize your deployed environment by updating the common configuration in the default ConfigMap CR for the service, and by creating custom services. For more information about how to customize your data plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.

Note

Red Hat OpenStack Services on OpenShift (RHOSO) supports external deployments of Red Hat Ceph Storage 7 and 8. Configuration examples that reference Red Hat Ceph Storage use Release 7 information. If you are using Red Hat Ceph Storage 8, adjust the configuration examples accordingly.

5.1. Prerequisites

  • A functional control plane, created with the OpenStack Operator. For more information, see Creating the control plane.
  • You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.

5.2. Creating the data plane secrets

You must create the Secret custom resources (CRs) that the data plane requires to be able to operate. The Secret CRs are used by the data plane nodes to secure access between nodes, to register the node operating systems with the Red Hat Customer Portal, to enable node repositories, and to provide Compute nodes with access to libvirt.

To enable secure access between nodes, you must generate two SSH keys and create an SSH key Secret CR for each key:

  • An SSH key to enable Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key. You can create an SSH key for each OpenStackDataPlaneNodeSet CR in your data plane.
  • An SSH key to enable migration of instances between Compute nodes.

Prerequisites

  • Pre-provisioned nodes are configured with an SSH public key in the $HOME/.ssh/authorized_keys file for a user with passwordless sudo privileges. For more information, see Managing sudo access in the RHEL Configuring basic system settings guide.

Procedure

  1. For unprovisioned nodes, create the SSH key pair for Ansible:

    $ ssh-keygen -f <key_file_name> -N "" -t rsa -b 4096
    • Replace <key_file_name> with the name to use for the key pair.
  2. Create the Secret CR for Ansible and apply it to the cluster:

    $ oc create secret generic dataplane-ansible-ssh-private-key-secret \
    --save-config \
    --dry-run=client \
    --from-file=ssh-privatekey=<key_file_name> \
    --from-file=ssh-publickey=<key_file_name>.pub \
    [--from-file=authorized_keys=<key_file_name>.pub] -n openstack \
    -o yaml | oc apply -f -
    • Replace <key_file_name> with the name and location of your SSH key pair file.
    • Optional: Only include the --from-file=authorized_keys option for bare-metal nodes that must be provisioned when creating the data plane.
  3. If you are creating Compute nodes, create a secret for migration.

    1. Create the SSH key pair for instance migration:

      $ ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''
    2. Create the Secret CR for migration and apply it to the cluster:

      $ oc create secret generic nova-migration-ssh-key \
      --save-config \
      --from-file=ssh-privatekey=nova-migration-ssh-key \
      --from-file=ssh-publickey=nova-migration-ssh-key.pub \
      -n openstack \
      -o yaml | oc apply -f -
  4. For nodes that have not been registered to the Red Hat Customer Portal, create the Secret CR for subscription-manager credentials to register the nodes:

    $ oc create secret generic subscription-manager -n openstack \
    --from-literal rhc_auth='{"login": {"username": "<subscription_manager_username>", "password": "<subscription_manager_password>"}}'
    • Replace <subscription_manager_username> with the username you set for subscription-manager.
    • Replace <subscription_manager_password> with the password you set for subscription-manager.
  5. Create a Secret CR that contains the Red Hat registry credentials:

    $ oc create secret generic redhat-registry -n openstack --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'
    • Replace <username> and <password> with your Red Hat registry username and password credentials.

      For information about how to create your registry service account, see the Knowledge Base article Creating Registry Service Accounts.

  6. If you are creating Compute nodes, create a secret for libvirt.

    1. Create a file on your workstation named secret_libvirt.yaml to define the libvirt secret:

      apiVersion: v1
      kind: Secret
      metadata:
       name: libvirt-secret
       namespace: openstack
      type: Opaque
      data:
       LibvirtPassword: <base64_password>
      • Replace <base64_password> with a base64-encoded string with maximum length 63 characters. You can use the following command to generate a base64-encoded password:

        $ echo -n <password> | base64
        Tip

        If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.
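
        For example, the equivalent fragment of the libvirt Secret CR that uses the stringData field; the value is a placeholder:

          stringData:
            LibvirtPassword: <plain_text_password>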

    2. Create the Secret CR:

      $ oc apply -f secret_libvirt.yaml -n openstack
  7. Verify that the Secret CRs are created:

    $ oc describe secret dataplane-ansible-ssh-private-key-secret
    $ oc describe secret nova-migration-ssh-key
    $ oc describe secret subscription-manager
    $ oc describe secret redhat-registry
    $ oc describe secret libvirt-secret

5.3. Creating a data plane with pre-provisioned nodes

You can define an OpenStackDataPlaneNodeSet custom resource (CR) for each logical grouping of pre-provisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.

Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.

You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
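
For example, a minimal sketch of this precedence that uses a hypothetical rhel-user account for one node:

nodeTemplate:
  ansible:
    ansibleUser: cloud-admin
nodes:
  edpm-compute-0:
    hostName: edpm-compute-0
    ansible:
      ansibleUser: rhel-user # This node connects as rhel-user; all other nodes inherit cloud-admin from nodeTemplate.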

Tip

For an example OpenStackDataPlaneNodeSet CR that creates a node set from pre-provisioned Compute nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes.

Procedure

  1. Create a file on your workstation named openstack_preprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-data-plane
      namespace: openstack
    spec:
      env:
        - name: ANSIBLE_FORCE_COLOR
          value: "True"
    • metadata.name: The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set.
    • spec.env: An optional list of environment variables to pass to the pod.
  2. Connect the data plane to the control plane network:

    spec:
      ...
      networkAttachments:
        - ctlplane
  3. Specify that the nodes in this set are pre-provisioned:

      preProvisioned: true
  4. Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

      nodeTemplate:
        ansibleSSHPrivateKeySecret: <secret-key>
    • Replace <secret-key> with the name of the SSH key Secret CR you created for this node set in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
  5. Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO files, and ansible-runner creates a FIFO file to write logs to. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
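
    For example, a minimal PVC sketch that meets these requirements; the claim name, size, and storage class are illustrative:

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: ansible-logs-pvc
        namespace: openstack
      spec:
        accessModes:
        - ReadWriteOnce
        volumeMode: Filesystem
        storageClassName: <storage_class>
        resources:
          requests:
            storage: 5Gi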
  6. Enable persistent logging for the data plane nodes:

      nodeTemplate:
        ...
        extraMounts:
          - extraVolType: Logs
            volumes:
            - name: ansible-logs
              persistentVolumeClaim:
                claimName: <pvc_name>
            mounts:
            - name: ansible-logs
              mountPath: "/runner/artifacts"
    • Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
  7. Specify the management network:

      nodeTemplate:
        ...
        managementNetwork: ctlplane
  8. Specify the Secret CRs used to source the usernames and passwords to register the operating system of your nodes and to enable repositories. The following example demonstrates how to register your nodes to Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

      nodeTemplate:
        ...
        ansible:
          ansibleUser: cloud-admin
          ansiblePort: 22
          ansibleVarsFrom:
            - secretRef:
                name: subscription-manager
            - secretRef:
                name: redhat-registry
          ansibleVars:
            rhc_release: 9.4
            rhc_repositories:
                - {name: "*", state: disabled}
                - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
                - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
            edpm_bootstrap_release_version_package: []
  9. Add the network configuration template to apply to your data plane nodes. The following example applies the single NIC VLANs network configuration to the data plane nodes:

      nodeTemplate:
        ...
        ansible:
          ...
          ansibleVars:
            ...
            edpm_network_config_os_net_config_mappings:
              edpm-compute-0:
                nic1: 52:54:04:60:55:22
            neutron_physical_bridge_name: br-ex
            neutron_public_interface_name: eth0
            edpm_network_config_nmstate: true
            edpm_network_config_update: false
            edpm_network_config_template: |
              ---
              {% set mtu_list = [ctlplane_mtu] %}
              {% for network in nodeset_networks %}
              {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
              {%- endfor %}
              {% set min_viable_mtu = mtu_list | max %}
              network_config:
              - type: ovs_bridge
                name: {{ neutron_physical_bridge_name }}
                mtu: {{ min_viable_mtu }}
                use_dhcp: false
                dns_servers: {{ ctlplane_dns_nameservers }}
                domain: {{ dns_search_domains }}
                addresses:
                - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
                routes: {{ ctlplane_host_routes }}
                members:
                - type: interface
                  name: nic1
                  mtu: {{ min_viable_mtu }}
                  # force the MAC address of the bridge to this interface
                  primary: true
              {% for network in nodeset_networks %}
                - type: vlan
                  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
                  vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
                  addresses:
                  - ip_netmask:
                      {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
                  routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
              {% endfor %}
    • nic1: The MAC address assigned to the NIC to use for network configuration on the Compute node.
    • edpm_network_config_nmstate: Sets the os-net-config provider to nmstate. The default value is true. Change it to false only if a specific limitation of the nmstate provider requires you to use the ifcfg provider. After the nmstate limitations are resolved, the ifcfg provider will be deprecated and removed in a future release. In this RHOSO release, adoption of a RHOSP 17.1 deployment with the nmstate provider is not supported. For this and other limitations of RHOSO nmstate support, see https://issues.redhat.com/browse/OSPRH-11309.
    • edpm_network_config_update: When deploying a node set for the first time, set the edpm_network_config_update variable to false. When updating or adopting a node set, set edpm_network_config_update to true.

      Important

      After an update or an adoption, you must reset edpm_network_config_update to false; otherwise, the nodes could lose network access. While edpm_network_config_update is true, the updated network configuration is reapplied every time you create an OpenStackDataPlaneDeployment CR that includes the configure-network service in its servicesOverride list.

    • dns_servers: Autogenerated from IPAM and DNS and no user input is required.
    • domain: Autogenerated from IPAM and DNS and no user input is required.
    • routes: Autogenerated from IPAM and DNS and no user input is required.
  10. Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties.
  11. Define each node in this node set:

      nodes:
        edpm-compute-0:
          hostName: edpm-compute-0
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.100
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.100
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.100
          ansible:
            ansibleHost: 192.168.122.100
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-compute-0.example.com
        edpm-compute-1:
          hostName: edpm-compute-1
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.101
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.101
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.101
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.101
          ansible:
            ansibleHost: 192.168.122.101
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-compute-1.example.com
    • edpm-compute-0: The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
    • networks: Defines the IPAM and the DNS records for the node.
    • fixedIP: Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR.
    • networkData: The Secret that contains network configuration that is node-specific.
    • userData.name: The Secret that contains user data that is node-specific.
    • ansibleHost: Overrides the hostname or IP address that Ansible uses to connect to the node. The default value is the value set for the hostName field for the node or the node definition reference, for example, edpm-compute-0.
    • ansibleVars: Node-specific Ansible variables that customize the node.

      Note
      • Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
      • You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
      • Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

        For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties.

  12. Save the openstack_preprovisioned_node_set.yaml definition file.
  13. Create the data plane resources:

    $ oc create --save-config -f openstack_preprovisioned_node_set.yaml -n openstack

Verification

  1. Verify that the data plane resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states.

  2. Verify that the Secret resource was created for the node set:

    $ oc get secret | grep openstack-data-plane
    dataplanenodeset-openstack-data-plane Opaque 1 3m50s
  3. Verify the services were created:

    $ oc get openstackdataplaneservice -n openstack
    NAME                AGE
    bootstrap           46m
    ceph-client         46m
    ceph-hci-pre        46m
    configure-network   46m
    configure-os        46m
    ...

5.3.1. Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes

The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Compute nodes with some node-specific configuration. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.

Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.

Note

The following variables are autogenerated from IPAM and DNS and are not provided by the user:

  • ctlplane_dns_nameservers
  • dns_search_domains
  • ctlplane_host_routes
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane
  namespace: openstack
spec:
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
  networkAttachments:
    - ctlplane
  preProvisioned: true
  nodeTemplate:
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
    extraMounts:
      - extraVolType: Logs
        volumes:
        - name: ansible-logs
          persistentVolumeClaim:
            claimName: <pvc_name>
        mounts:
        - name: ansible-logs
          mountPath: "/runner/artifacts"
    managementNetwork: ctlplane
    ansible:
      ansibleUser: cloud-admin
      ansiblePort: 22
      ansibleVarsFrom:
        - secretRef:
            name: subscription-manager
        - secretRef:
            name: redhat-registry
      ansibleVars:
        rhc_release: 9.4
        rhc_repositories:
            - {name: "*", state: disabled}
            - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
            - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
            - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
            - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
            - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
            - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
        edpm_bootstrap_release_version_package: []
        edpm_network_config_os_net_config_mappings:
          edpm-compute-0:
            nic1: 52:54:04:60:55:22
        neutron_physical_bridge_name: br-ex
        neutron_public_interface_name: eth0
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: interface
              name: nic1
              mtu: {{ min_viable_mtu }}
              # force the MAC address of the bridge to this interface
              primary: true
          {% for network in nodeset_networks %}
            - type: vlan
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              addresses:
              - ip_netmask:
                  {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
          {% endfor %}
  nodes:
    edpm-compute-0:
      hostName: edpm-compute-0
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.100
      - name: internalapi
        subnetName: subnet1
        fixedIP: 172.17.0.100
      - name: storage
        subnetName: subnet1
        fixedIP: 172.18.0.100
      - name: tenant
        subnetName: subnet1
        fixedIP: 172.19.0.100
      ansible:
        ansibleHost: 192.168.122.100
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-0.example.com
    edpm-compute-1:
      hostName: edpm-compute-1
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.101
      - name: internalapi
        subnetName: subnet1
        fixedIP: 172.17.0.101
      - name: storage
        subnetName: subnet1
        fixedIP: 172.18.0.101
      - name: tenant
        subnetName: subnet1
        fixedIP: 172.19.0.101
      ansible:
        ansibleHost: 192.168.122.101
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-1.example.com

5.4. Creating a data plane with unprovisioned nodes

To create a data plane with unprovisioned nodes, you must perform the following tasks:

  1. Create a BareMetalHost custom resource (CR) for each bare-metal data plane node.
  2. Define an OpenStackDataPlaneNodeSet CR for each logical grouping of unprovisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking.

5.4.1. Prerequisites

5.4.2. Creating the BareMetalHost CRs for unprovisioned nodes

You must create a BareMetalHost custom resource (CR) for each bare-metal data plane node. At a minimum, you must provide the data required to add the bare-metal data plane node on the network so that the remaining installation steps can access the node and perform the configuration.

Note

If you use the ctlplane interface for provisioning and you have rp_filter configured on the kernel to enable Reverse Path Forwarding (RPF), then the reverse path filtering logic drops traffic. For information about how to prevent traffic being dropped because of the RPF filter, see How to prevent asymmetric routing.

Procedure

  1. The Bare Metal Operator (BMO) manages BareMetalHost custom resources (CRs) in the openshift-machine-api namespace by default. Update the Provisioning CR to watch all namespaces:

    $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'
  2. If you are using virtual media boot for bare-metal data plane nodes and the nodes are not connected to a provisioning network, you must update the Provisioning CR to enable virtualMediaViaExternalNetwork, which enables bare-metal connectivity through the external network:

    $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"virtualMediaViaExternalNetwork": true }}'
  3. Create a file on your workstation that defines the Secret CR with the credentials for accessing the Baseboard Management Controller (BMC) of each bare-metal data plane node in the node set:

    apiVersion: v1
    kind: Secret
    metadata:
      name: edpm-compute-0-bmc-secret
      namespace: openstack
    type: Opaque
    data:
      username: <base64_username>
      password: <base64_password>
    • Replace <base64_username> and <base64_password> with strings that are base64-encoded. You can use the following command to generate a base64-encoded string:

      $ echo -n <string> | base64
      Tip

      If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.
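
      For example, the equivalent fragment of the BMC Secret CR that uses the stringData field; the values are placeholders:

        stringData:
          username: <bmc_username>
          password: <bmc_password>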

  4. Create a file named bmh_nodes.yaml on your workstation that defines the BareMetalHost CR for each bare-metal data plane node. The following example creates a BareMetalHost CR that uses the Redfish virtual media provisioning method:

    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: edpm-compute-0
      namespace: openstack
      labels:
        app: openstack
        workload: compute
    spec:
      bmc:
        address: redfish-virtualmedia+http://192.168.111.1:8000/redfish/v1/Systems/e8efd888-f844-4fe0-9e2e-498f4ab7806d
        credentialsName: edpm-compute-0-bmc-secret
      bootMACAddress: 00:c7:e4:a7:e7:f3
      bootMode: UEFI
      online: false
      [preprovisioningNetworkDataName: <network_config_secret_name>]
    • metadata.labels: Key-value pairs, such as app, workload, and nodeName, that provide varying levels of granularity for labelling nodes. You can use these labels when you create an OpenStackDataPlaneNodeSet CR to describe the configuration of bare-metal nodes to be provisioned or to define nodes in a node set.
    • spec.bmc.address: The URL for communicating with the BMC controller of the node. For information about BMC addressing for other provisioning methods, see BMC addressing in the RHOCP Installing on bare metal guide.
    • spec.bmc.credentialsName: The name of the Secret CR you created in the previous step for accessing the BMC of the node.
    • preprovisioningNetworkDataName: An optional field that specifies the name of the network configuration secret in the local namespace to pass to the pre-provisioning image. The network configuration must be in nmstate format.

    For more information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the RHOCP Installing on bare metal guide.

  5. Create the BareMetalHost resources:

    $ oc create -f bmh_nodes.yaml
  6. Verify that the BareMetalHost resources have been created and are in the Available state:

    $ oc wait --for=jsonpath='{.status.provisioning.state}'=available bmh/edpm-compute-0 --timeout=<timeout_value>
    • Replace <timeout_value> with a value in minutes you want the command to wait for completion of the task. For example, if you want the command to wait 60 minutes, you would use the value 60m. Use a value that is appropriate to the size of your deployment. Give large deployments more time to complete deployment tasks.
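
    You can also list all BareMetalHost resources in the openstack namespace to review their provisioning states:

    $ oc get bmh -n openstack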

5.4.3. Creating an OpenStackDataPlaneNodeSet CR with unprovisioned nodes

You can define an OpenStackDataPlaneNodeSet custom resource (CR) for each logical grouping of unprovisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.

Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.

You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.

Note

To set a root password for the data plane nodes during provisioning, use the passwordSecret field in the OpenStackDataPlaneNodeSet CR. For more information, see How to set a root password for the Dataplane Node on Red Hat OpenStack Services on OpenShift.

Tip

For an example OpenStackDataPlaneNodeSet CR that creates a node set from unprovisioned Compute nodes, see Example OpenStackDataPlaneNodeSet CR for unprovisioned nodes.

Prerequisites

Procedure

  1. Create a file on your workstation named openstack_unprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-data-plane
      namespace: openstack
    spec:
      tlsEnabled: true
      env:
        - name: ANSIBLE_FORCE_COLOR
          value: "True"
    • metadata.name: The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set.
    • spec.env: An optional list of environment variables to pass to the pod.
  2. Connect the data plane to the control plane network:

    spec:
      ...
      networkAttachments:
        - ctlplane
  3. Specify that the nodes in this set are unprovisioned and must be provisioned when creating the resource:

      preProvisioned: false
  4. Define the baremetalSetTemplate field to describe the configuration of the bare-metal nodes that must be provisioned when creating the resource:

      baremetalSetTemplate:
        deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
        bmhNamespace: <bmh_namespace>
        cloudUserName: <ansible_ssh_user>
        bmhLabelSelector:
          app: <bmh_label>
        ctlplaneInterface: <interface>
    • Replace <bmh_namespace> with the namespace defined in the corresponding BareMetalHost CR for the node, for example, openstack.
    • Replace <ansible_ssh_user> with the username of the Ansible SSH user, for example, cloud-admin.
    • Replace <bmh_label> with the metadata label defined in the corresponding BareMetalHost CR for the node, for example, openstack. Metadata labels, such as app, workload, and nodeName, are key-value pairs for labelling nodes. Set the bmhLabelSelector field to select data plane nodes based on one or more labels that match the labels in the corresponding BareMetalHost CR.
    • Replace <interface> with the control plane interface the node connects to, for example, enp6s0.
  5. If you created a custom OpenStackProvisionServer CR, add it to your baremetalSetTemplate definition:

      baremetalSetTemplate:
        ...
        provisionServerName: my-os-provision-server
  6. Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

      nodeTemplate:
        ansibleSSHPrivateKeySecret: <secret-key>
    • Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
  7. Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin: NFS is incompatible with FIFO files, and ansible-runner writes logs to a FIFO file. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
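
    The following manifest is a minimal sketch of such a PVC. The name and requested storage size are placeholders that you adapt to your environment, and the cluster default storage class is used because no storageClassName is set:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ansible-logs-pvc
      namespace: openstack
    spec:
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi

    Create the PVC with oc create -f <file_name>, and use its name for <pvc_name> in the next step.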
  8. Enable persistent logging for the data plane nodes:

      nodeTemplate:
        ...
        extraMounts:
          - extraVolType: Logs
            volumes:
            - name: ansible-logs
              persistentVolumeClaim:
                claimName: <pvc_name>
            mounts:
            - name: ansible-logs
              mountPath: "/runner/artifacts"
    • Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
  9. Specify the management network:

      nodeTemplate:
        ...
        managementNetwork: ctlplane
  10. Specify the Secret CRs used to source the usernames and passwords to register the operating system of your nodes and to enable repositories. The following example demonstrates how to register your nodes to Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

      nodeTemplate:
        ansible:
          ansibleUser: cloud-admin
          ansiblePort: 22
          ansibleVarsFrom:
            - secretRef:
                name: subscription-manager
            - secretRef:
                name: redhat-registry
          ansibleVars:
            rhc_release: 9.4
            rhc_repositories:
                - {name: "*", state: disabled}
                - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
                - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
            edpm_bootstrap_release_version_package: []
  11. Add the network configuration template to apply to your data plane nodes. The following example applies the single NIC VLANs network configuration to the data plane nodes:

      nodeTemplate:
        ...
        ansible:
          ...
          ansibleVars:
            ...
            edpm_network_config_os_net_config_mappings:
              edpm-compute-0:
                nic1: 52:54:04:60:55:22
              edpm-compute-1:
                nic1: 52:54:04:60:55:22
            neutron_physical_bridge_name: br-ex
            neutron_public_interface_name: eth0
            edpm_network_config_nmstate: true
            edpm_network_config_update: false
            edpm_network_config_template: |
              ---
              {% set mtu_list = [ctlplane_mtu] %}
              {% for network in nodeset_networks %}
              {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
              {%- endfor %}
              {% set min_viable_mtu = mtu_list | max %}
              network_config:
              - type: ovs_bridge
                name: {{ neutron_physical_bridge_name }}
                mtu: {{ min_viable_mtu }}
                use_dhcp: false
                dns_servers: {{ ctlplane_dns_nameservers }}
                domain: {{ dns_search_domains }}
                addresses:
                - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
                routes: {{ ctlplane_host_routes }}
                members:
                - type: interface
                  name: nic1
                  mtu: {{ min_viable_mtu }}
                  # force the MAC address of the bridge to this interface
                  primary: true
              {% for network in nodeset_networks %}
                - type: vlan
                  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
                  vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
                  addresses:
                  - ip_netmask:
                      {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
                  routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
              {% endfor %}
    • nic1: The MAC address assigned to the NIC to use for network configuration on the Compute node.
    • edpm_network_config_nmstate: Sets the os-net-config provider to nmstate. The default value is true. Change it to false only if a specific limitation of the nmstate provider requires you to use the ifcfg provider. The ifcfg provider will be deprecated and removed in a future release, after the nmstate limitations are resolved. In this RHOSO release, adoption of a RHOSP 17.1 deployment with the nmstate provider is not supported. For this and other limitations of RHOSO nmstate support, see https://issues.redhat.com/browse/OSPRH-11309.
    • edpm_network_config_update: When deploying a node set for the first time, set the edpm_network_config_update variable to false. When updating or adopting a node set, set edpm_network_config_update to true.

      Important

      After an update or an adoption, you must reset edpm_network_config_update to false. Otherwise, the nodes could lose network access. Whenever edpm_network_config_update is true, the updated network configuration is reapplied every time you create an OpenStackDataPlaneDeployment CR that includes the configure-network service in its servicesOverride list.

    • dns_servers: Autogenerated from IPAM and DNS and no user input is required.
    • domain: Autogenerated from IPAM and DNS and no user input is required.
    • routes: Autogenerated from IPAM and DNS and no user input is required.
  12. Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR properties.
  13. Define each node in this node set:

      nodes:
        edpm-compute-0:
          hostName: edpm-compute-0
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100
          - name: internalapi
            subnetName: subnet1
          - name: storage
            subnetName: subnet1
          - name: tenant
            subnetName: subnet1
          networkData:
            name: edpm-compute-0-network-data
            namespace: openstack
          userData:
            name: edpm-compute-0-user-data
            namespace: openstack
          ansible:
            ansibleHost: 192.168.122.100
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-compute-0.example.com
          bmhLabelSelector:
            nodeName: edpm-compute-0
        edpm-compute-1:
          hostName: edpm-compute-1
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.101
          - name: internalapi
            subnetName: subnet1
          - name: storage
            subnetName: subnet1
          - name: tenant
            subnetName: subnet1
          networkData:
            name: edpm-compute-1-network-data
            namespace: openstack
          userData:
            name: edpm-compute-1-user-data
            namespace: openstack
          ansible:
            ansibleHost: 192.168.122.101
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-compute-1.example.com
          bmhLabelSelector:
            nodeName: edpm-compute-1
    • edpm-compute-0: The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
    • networks: Defines the IPAM and the DNS records for the node.
    • fixedIP: Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR.
    • networkData: The Secret that contains network configuration that is node-specific.
    • userData.name: The Secret that contains user data that is node-specific.
    • ansibleHost: Overrides the hostname or IP address that Ansible uses to connect to the node. The default value is the value set for the hostName field for the node or the node definition reference, for example, edpm-compute-0.
    • ansibleVars: Node-specific Ansible variables that customize the node.
    • bmhLabelSelector: Metadata labels, such as app, workload, and nodeName, are key-value pairs for labelling nodes. Set the bmhLabelSelector field to select data plane nodes based on one or more labels that match the labels in the corresponding BareMetalHost CR.

      Note
      • Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
      • You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
      • Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

        For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties.

  14. Save the openstack_unprovisioned_node_set.yaml definition file.
  15. Create the data plane resources:

    $ oc create --save-config -f openstack_unprovisioned_node_set.yaml -n openstack

Verification

  1. Verify that the data plane resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states.

  2. Verify that the Secret resource was created for the node set:

    $ oc get secret -n openstack | grep openstack-data-plane
    dataplanenodeset-openstack-data-plane Opaque 1 3m50s
  3. Verify that the nodes have transitioned to the provisioned state:

    $ oc get bmh
    NAME            STATE         CONSUMER               ONLINE   ERROR   AGE
    edpm-compute-0  provisioned   openstack-data-plane   true             3d21h
  4. Verify that the services were created:

    $ oc get openstackdataplaneservice -n openstack
    NAME                    AGE
    bootstrap               8m40s
    ceph-client             8m40s
    ceph-hci-pre            8m40s
    configure-network       8m40s
    configure-os            8m40s
    ...

5.4.4. Example OpenStackDataPlaneNodeSet CR for unprovisioned nodes

The following example OpenStackDataPlaneNodeSet CR creates a node set from unprovisioned Compute nodes with some node-specific configuration. The unprovisioned Compute nodes are provisioned when the node set is created. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.

Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.

Note

The following variables are autogenerated from IPAM and DNS and are not provided by the user:

  • ctlplane_dns_nameservers
  • dns_search_domains
  • ctlplane_host_routes
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane
  namespace: openstack
spec:
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
  networkAttachments:
    - ctlplane
  preProvisioned: false
  baremetalSetTemplate:
    deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
    bmhNamespace: openstack
    cloudUserName: cloud-admin
    bmhLabelSelector:
      app: openstack
    ctlplaneInterface: enp1s0
  nodeTemplate:
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
    extraMounts:
      - extraVolType: Logs
        volumes:
        - name: ansible-logs
          persistentVolumeClaim:
            claimName: <pvc_name>
        mounts:
        - name: ansible-logs
          mountPath: "/runner/artifacts"
    managementNetwork: ctlplane
    ansible:
      ansibleUser: cloud-admin
      ansiblePort: 22
      ansibleVarsFrom:
        - secretRef:
            name: subscription-manager
        - secretRef:
            name: redhat-registry
      ansibleVars:
        rhc_release: 9.4
        rhc_repositories:
            - {name: "*", state: disabled}
            - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
            - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
            - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
            - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
            - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
            - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
        edpm_bootstrap_release_version_package: []
        edpm_network_config_os_net_config_mappings:
          edpm-compute-0:
            nic1: 52:54:04:60:55:22
          edpm-compute-1:
            nic1: 52:54:04:60:55:22
        neutron_physical_bridge_name: br-ex
        neutron_public_interface_name: eth0
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: interface
              name: nic1
              mtu: {{ min_viable_mtu }}
              # force the MAC address of the bridge to this interface
              primary: true
          {% for network in nodeset_networks %}
            - type: vlan
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              addresses:
              - ip_netmask:
                  {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
          {% endfor %}
  nodes:
    edpm-compute-0:
      hostName: edpm-compute-0
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.100
      - name: internalapi
        subnetName: subnet1
      - name: storage
        subnetName: subnet1
      - name: tenant
        subnetName: subnet1
      ansible:
        ansibleHost: 192.168.122.100
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-0.example.com
      bmhLabelSelector:
        nodeName: edpm-compute-0
    edpm-compute-1:
      hostName: edpm-compute-1
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.101
      - name: internalapi
        subnetName: subnet1
      - name: storage
        subnetName: subnet1
      - name: tenant
        subnetName: subnet1
      ansible:
        ansibleHost: 192.168.122.101
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-1.example.com
      bmhLabelSelector:
        nodeName: edpm-compute-1

5.4.5. How to prevent asymmetric routing

If the Red Hat OpenShift Container Platform (RHOCP) cluster nodes have an interface with an IP address in the same IP subnet that the nodes use during provisioning, traffic becomes asymmetric. If you use the ctlplane interface for provisioning and rp_filter is configured on the kernel to enable reverse path filtering (RPF), the reverse path filtering logic drops this asymmetric traffic. Implement one of the following methods to prevent traffic from being dropped because of the RPF filter; an example check for the rp_filter setting follows the list:

  • Use a dedicated NIC on the network where RHOCP binds the provisioning service, that is, the RHOCP machine network or a dedicated RHOCP provisioning network.
  • Use a dedicated NIC on a network that is reachable through routing on the RHOCP master nodes. For information about how to add routes to your RHOCP networks, see Adding routes to the RHOCP networks in Customizing the Red Hat OpenStack Services on OpenShift deployment.
  • Use a shared NIC for provisioning and the RHOSO ctlplane interface. You can use one of the following methods to configure a shared NIC:

    • Configure your network to support two IP ranges by configuring two IP addresses on the router interface: one in the address range you use for the ctlplane network and the other in the address range that you use for provisioning.
    • Configure a DHCP server to allocate an address range for provisioning that is different from the ctlplane address range.
    • If a DHCP server is not available, configure the preprovisioningNetworkData field on the BareMetalHost CRs. For information about how to configure the preprovisioningNetworkData field, see Configuring preprovisioningNetworkData on the BareMetalHost CRs.
  • If your environment has RHOCP master and worker nodes that are not connected to the network used by the EDPM nodes, you can set the nodeSelector field on the OpenStackProvisionServer CR to place it on a worker node that does not have an interface with an IP address in the same IP subnet as that used by the nodes when provisioning.
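
To check whether reverse path filtering is currently enabled on a node, you can inspect the rp_filter sysctl. The following check and its output are illustrative; a value of 1 (strict) or 2 (loose) means that reverse path filtering is active:

$ sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1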

If you use the ctlplane interface for provisioning and you have rp_filter configured on the kernel to enable Reverse Path Forwarding (RPF), then the reverse path filtering logic drops traffic. To prevent traffic being dropped because of the RPF filter, you can configure the preprovisioningNetworkData field on the BareMetalHost CRs.

Procedure

  1. Create a Secret CR with preprovisioningNetworkData in nmstate format for each BareMetalHost CR:

    apiVersion: v1
    kind: Secret
    metadata:
      name: leaf0-0-preprovision-network-data
      namespace: openstack
    stringData:
      nmstate: |
        interfaces:
          - name: enp5s0
            type: ethernet
            state: up
            ipv4:
              enabled: true
              address:
                - ip: 192.168.130.100
                  prefix-length: 24
        dns-resolver:
          config:
            server:
              - 192.168.122.1
        routes:
          config:
            - destination: 0.0.0.0/0
              next-hop-address: 192.168.130.1
              next-hop-interface: enp5s0
    type: Opaque
  2. Create the Secret resources:

    $ oc create -f secret_leaf0-0.yaml
  3. Open the BareMetalHost CR file, for example, bmh_nodes.yaml.
  4. Add the preprovisioningNetworkDataName field to each BareMetalHost CR defined for each node in the bmh_nodes.yaml file:

    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      annotations:
        inspect.metal3.io: disabled
      labels:
        app: openstack
        nodeset: leaf0
      name: leaf0-0
      namespace: openstack
    spec:
      architecture: x86_64
      automatedCleaningMode: metadata
      bmc:
        address: redfish-virtualmedia+http://sushy.utility:8000/redfish/v1/Systems/df2bf92f-3e2c-47e1-b1fa-0d2e06bd1b1d
        credentialsName: bmc-secret
      bootMACAddress: 52:54:04:15:a8:d9
      bootMode: UEFI
      online: false
      preprovisioningNetworkDataName: leaf0-0-preprovision-network-data
      rootDeviceHints:
        deviceName: /dev/sda
  5. Update the BareMetalHost CRs:

    $ oc apply -f bmh_nodes.yaml

5.5. OpenStackDataPlaneNodeSet CR spec properties

The following sections detail the OpenStackDataPlaneNodeSet CR spec properties you can configure.

5.5.1. nodeTemplate

Defines the common attributes for the nodes in this OpenStackDataPlaneNodeSet. You can override these common attributes in the definition for each individual node.

Table 5.1. nodeTemplate properties

ansibleSSHPrivateKeySecret

Name of the private SSH key secret that contains the private SSH key for connecting to nodes.

Secret name format: Secret.data.ssh-privatekey

For more information, see Creating an SSH authentication secret.

Default: dataplane-ansible-ssh-private-key-secret

managementNetwork

Name of the network to use for management (SSH/Ansible). Default: ctlplane

networks

Network definitions for the OpenStackDataPlaneNodeSet.

ansible

Ansible configuration options. For more information, see ansible properties.

extraMounts

The files to mount into an Ansible Execution Pod.

userData

UserData configuration for the OpenStackDataPlaneNodeSet.

networkData

NetworkData configuration for the OpenStackDataPlaneNodeSet.

5.5.2. nodes

Defines the node names and node-specific attributes for the nodes in this OpenStackDataPlaneNodeSet. Overrides the common attributes defined in the nodeTemplate.

Table 5.2. nodes properties

ansible

Ansible configuration options. For more information, see ansible properties.

extraMounts

The files to mount into an Ansible Execution Pod.

hostName

The node name.

managementNetwork

Name of the network to use for management (SSH/Ansible).

networkData

NetworkData configuration for the node.

networks

Instance networks.

userData

Node-specific user data.

5.5.3. ansible

Defines the group of Ansible configuration options.

Table 5.3. ansible properties

ansibleUser

The user associated with the secret you created in Creating the data plane secrets. Default: rhel-user

ansibleHost

SSH host for the Ansible connection.

ansiblePort

SSH port for the Ansible connection.

ansibleVars

The Ansible variables that customize the set of nodes. You can use this property to configure any custom Ansible variable, including the Ansible variables available for each edpm-ansible role. For a complete list of Ansible variables by role, see the edpm-ansible documentation.

Note

The ansibleVars parameters that you can configure for an OpenStackDataPlaneNodeSet CR are determined by the services defined for the OpenStackDataPlaneNodeSet. The OpenStackDataPlaneService CRs call the Ansible playbooks from the edpm-ansible playbook collection, which include the roles that are executed as part of the data plane service.

ansibleVarsFrom

A list of sources to populate Ansible variables from. Values defined by an AnsibleVars with a duplicate key take precedence. For more information, see ansibleVarsFrom properties.

5.5.4. ansibleVarsFrom

Defines the list of sources to populate Ansible variables from.

Table 5.4. ansibleVarsFrom properties

prefix

An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.

configMapRef

The ConfigMap CR to select the ansibleVars from.

secretRef

The Secret CR to select the ansibleVars from.
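
As an illustration of how these properties fit together, the following fragment is a sketch that populates Ansible variables from a hypothetical ConfigMap named network-overrides and prefixes each key with edpm_. A key named dns_server in the ConfigMap becomes the Ansible variable edpm_dns_server, and a duplicate key defined directly in ansibleVars takes precedence:

nodeTemplate:
  ansible:
    ansibleVarsFrom:
      - prefix: edpm_
        configMapRef:
          name: network-overrides    # hypothetical ConfigMap in the openstack namespace
    ansibleVars:
      edpm_dns_server: 192.168.122.1 # overrides the value derived from the ConfigMap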

5.6. Deploying the data plane

You use the OpenStackDataPlaneDeployment custom resource definition (CRD) to configure the services on the data plane nodes and deploy the data plane. You control the execution of Ansible on the data plane by creating OpenStackDataPlaneDeployment custom resources (CRs). Each OpenStackDataPlaneDeployment CR models a single Ansible execution. Create an OpenStackDataPlaneDeployment CR to deploy each of your OpenStackDataPlaneNodeSet CRs.

Note

When the OpenStackDataPlaneDeployment successfully completes execution, it does not automatically execute the Ansible again, even if the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources are changed. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR. Remove any failed OpenStackDataPlaneDeployment CRs in your environment before creating a new one to allow the new OpenStackDataPlaneDeployment to run Ansible with an updated Secret.
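
For example, to remove a failed OpenStackDataPlaneDeployment CR before you create a new one, where the CR name is a placeholder:

$ oc delete openstackdataplanedeployment <failed_deployment_name> -n openstack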

Procedure

  1. Create a file on your workstation named openstack_data_plane_deploy.yaml to define the OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: data-plane-deploy
      namespace: openstack
    • metadata.name: The OpenStackDataPlaneDeployment CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the node sets in the deployment.
  2. Add all the OpenStackDataPlaneNodeSet CRs that you want to deploy:

    spec:
      nodeSets:
        - openstack-data-plane
        - <nodeSet_name>
        - ...
        - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
  3. Save the openstack_data_plane_deploy.yaml deployment file.
  4. Deploy the data plane:

    $ oc create -f openstack_data_plane_deploy.yaml -n openstack

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10

    If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
  5. Verify that the data plane is deployed:

    $ oc wait openstackdataplanedeployment data-plane-deploy --for=condition=Ready --timeout=<timeout_value>
    $ oc wait openstackdataplanenodeset openstack-data-plane --for=condition=Ready --timeout=<timeout_value>
    • Replace <timeout_value> with the length of time, in minutes, that you want the command to wait for the task to complete. For example, to wait 60 minutes, use the value 60m. If the Ready condition for the OpenStackDataPlaneDeployment CR or the OpenStackDataPlaneNodeSet CR is not met within this time frame, the command returns a timeout error. Use a value that is appropriate to the size of your deployment; give larger deployments more time to complete deployment tasks.

      For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  6. Map the Compute nodes to the Compute cell that they are connected to:

    $ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose

    If you did not create additional cells, this command maps the Compute nodes to cell1.
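
    To optionally confirm the mapping, list the hosts that are registered in each cell. The following command is a sketch that uses the same conductor pod as the previous command:

    $ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 list_hosts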

  7. Access the remote shell for the openstackclient pod and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    $ openstack hypervisor list

    If some Compute nodes are missing from the hypervisor list, retry the previous step. If the Compute nodes are still missing from the list, check the status and health of the nova-compute services on the deployed data plane nodes.
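
    One way to check the status of the nova-compute services is to list the Compute services from the same openstackclient shell and inspect the State column of the nova-compute rows; the following command is a sketch:

    $ openstack compute service list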

  8. Verify that the hypervisor hostname is a fully qualified domain name (FQDN):

    $ hostname -f

    If the hypervisor hostname is not an FQDN, for example, if it was registered as a short name instead, contact Red Hat Support.

5.7. Data plane conditions and states

Each data plane resource has a series of conditions within their status subresource that indicates the overall state of the resource, including its deployment progress.

For an OpenStackDataPlaneNodeSet, the Ready condition is False until an OpenStackDataPlaneDeployment has started and finished successfully. When the deployment succeeds, the Ready condition is set to True. A subsequent deployment sets the Ready condition back to False until that deployment succeeds, at which point it is set to True again.
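
To inspect these conditions directly, you can describe the resource and review the Conditions section of its status. The resource name in this example matches the node set created earlier in this guide:

$ oc describe openstackdataplanenodeset openstack-data-plane -n openstack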

Table 5.5. OpenStackDataPlaneNodeSet CR conditions

Ready

  • "True": The OpenStackDataPlaneNodeSet CR is successfully deployed.
  • "False": The deployment is not yet requested or has failed, or there are other failed conditions.

SetupReady

"True": All setup tasks for a resource are complete. Setup tasks include verifying the SSH key secret, verifying other fields on the resource, and creating the Ansible inventory for each resource. Each service-specific condition is set to "True" when that service completes deployment. You can check the service conditions to see which services have completed their deployment, or which services failed.

DeploymentReady

"True": The NodeSet has been successfully deployed.

InputReady

"True": The required inputs are available and ready.

NodeSetDNSDataReady

"True": DNSData resources are ready.

NodeSetIPReservationReady

"True": The IPSet resources are ready.

NodeSetBaremetalProvisionReady

"True": Bare-metal nodes are provisioned and ready.

Table 5.6. OpenStackDataPlaneNodeSet status fields

Deployed

  • "True": The OpenStackDataPlaneNodeSet CR is successfully deployed.
  • "False": The deployment is not yet requested or has failed, or there are other failed conditions.

DNSClusterAddresses

 

CtlplaneSearchDomain

 
Table 5.7. OpenStackDataPlaneDeployment CR conditions

Ready

  • "True": The data plane is successfully deployed.
  • "False": The data plane deployment failed, or there are other failed conditions.

DeploymentReady

"True": The data plane is successfully deployed.

InputReady

"True": The required inputs are available and ready.

<NodeSet> Deployment Ready

"True": The deployment has succeeded for the named NodeSet, indicating all services for the NodeSet have succeeded.

<NodeSet> <Service> Deployment Ready

"True": The deployment has succeeded for the named NodeSet and Service. Each <NodeSet> <Service> Deployment Ready specific condition is set to "True" as that service completes successfully for the named NodeSet. Once all services are complete for a NodeSet, the <NodeSet> Deployment Ready condition is set to "True". The service conditions indicate which services have completed their deployment, or which services failed and for which NodeSets.

Table 5.8. OpenStackDataPlaneDeployment status fields

Deployed

  • "True": The data plane is successfully deployed. All Services for all NodeSets have succeeded.
  • "False": The deployment is not yet requested or has failed, or there are other failed conditions.
Table 5.9. OpenStackDataPlaneService CR conditions

Ready

"True": The service has been created and is ready for use. "False": The service has failed to be created.

5.8. Troubleshooting data plane creation and deployment

To troubleshoot a deployment when services are not deploying or operating correctly, you can check the job condition message for the service, and you can check the logs for a node set.

5.8.1. Checking the job condition message for a service

Each data plane deployment in the environment has associated services. Each of these services has a job condition message that matches the current status of the AnsibleEE job that executes for that service. You can use this information to troubleshoot deployments when services are not deploying or operating correctly.

Procedure

  1. Determine the name and status of all deployments:

    $ oc get openstackdataplanedeployment

    The following example output shows a deployment currently in progress:

    $ oc get openstackdataplanedeployment

    NAME           NODESETS                  STATUS   MESSAGE
    edpm-compute   ["openstack-edpm-ipam"]   False    Deployment in progress
  2. Retrieve and inspect Ansible execution jobs.

    The Kubernetes jobs are labelled with the name of the OpenStackDataPlaneDeployment. You can list jobs for each OpenStackDataPlaneDeployment by using the label:

     $ oc get job -l openstackdataplanedeployment=edpm-compute
     NAME                                                 STATUS     COMPLETIONS   DURATION   AGE
     bootstrap-edpm-compute-openstack-edpm-ipam           Complete   1/1           78s        25h
     configure-network-edpm-compute-openstack-edpm-ipam   Complete   1/1           37s        25h
     configure-os-edpm-compute-openstack-edpm-ipam        Complete   1/1           66s        25h
     download-cache-edpm-compute-openstack-edpm-ipam      Complete   1/1           64s        25h
     install-certs-edpm-compute-openstack-edpm-ipam       Complete   1/1           46s        25h
     install-os-edpm-compute-openstack-edpm-ipam          Complete   1/1           57s        25h
     libvirt-edpm-compute-openstack-edpm-ipam             Complete   1/1           2m37s      25h
     neutron-metadata-edpm-compute-openstack-edpm-ipam    Complete   1/1           61s        25h
     nova-edpm-compute-openstack-edpm-ipam                Complete   1/1           3m20s      25h
     ovn-edpm-compute-openstack-edpm-ipam                 Complete   1/1           78s        25h
     run-os-edpm-compute-openstack-edpm-ipam              Complete   1/1           33s        25h
     ssh-known-hosts-edpm-compute                         Complete   1/1           19s        25h
     telemetry-edpm-compute-openstack-edpm-ipam           Complete   1/1           2m5s       25h
     validate-network-edpm-compute-openstack-edpm-ipam    Complete   1/1           16s        25h

    You can check logs by using oc logs -f job/<job-name>, for example, if you want to check the logs from the configure-network job:

     $ oc logs -f jobs/configure-network-edpm-compute-openstack-edpm-ipam | tail -n2
     PLAY RECAP *********************************************************************
     edpm-compute-0             : ok=22   changed=0    unreachable=0    failed=0    skipped=17   rescued=0    ignored=0
5.8.1.1. Job condition messages

AnsibleEE jobs have an associated condition message that indicates the current state of the service job. This condition message is displayed in the MESSAGE field of the oc get job <job_name> command output. Jobs return one of the following conditions when queried:

  • Job not started: The job has not started.
  • Job not found: The job could not be found.
  • Job is running: The job is currently running.
  • Job complete: The job execution is complete.
  • Job error occurred <error_message>: The job stopped executing unexpectedly. The <error_message> is replaced with a specific error message.

To further investigate a service that is displaying a particular job condition message, view its logs by using the command oc logs job/<service>. For example, to view the logs for the repo-setup-openstack-edpm service, use the command oc logs job/repo-setup-openstack-edpm.
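
You can also read the condition messages directly from the job status; the following jsonpath query is a sketch:

$ oc get job <job_name> -n openstack -o jsonpath='{.status.conditions[*].message}{"\n"}'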

5.8.2. Checking the logs for a node set

You can access the logs for a node set to check for deployment issues.

Procedure

  1. Retrieve pods with the OpenStackAnsibleEE label:

    $ oc get pods -l app=openstackansibleee
    configure-network-edpm-compute-j6r4l   0/1     Completed           0          3m36s
    validate-network-edpm-compute-6g7n9    0/1     Pending             0          0s
    validate-network-edpm-compute-6g7n9    0/1     ContainerCreating   0          11s
    validate-network-edpm-compute-6g7n9    1/1     Running             0          13s
  2. Access the pod that you want to check:

    1. Pod that is running:

      $ oc rsh validate-network-edpm-compute-6g7n9
    2. Pod that is not running:

      $ oc debug configure-network-edpm-compute-j6r4l
  3. List the directories in the /runner/artifacts mount:

    $ ls /runner/artifacts
    configure-network-edpm-compute
    validate-network-edpm-compute
  4. View the stdout for the required artifact:

    $ cat /runner/artifacts/configure-network-edpm-compute/stdout

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.