Deploying Red Hat OpenStack Services on OpenShift


Red Hat OpenStack Services on OpenShift 18.0

Deploying a Red Hat OpenStack Services on OpenShift environment on a Red Hat OpenShift Container Platform cluster

OpenStack Documentation Team

Abstract

Learn how to install the OpenStack Operator for Red Hat OpenStack Services on OpenShift (RHOSO), create a RHOSO control plane on a Red Hat OpenShift Container Platform cluster, and use the OpenStack Operator to deploy a RHOSO data plane.

Providing feedback on Red Hat documentation

We appreciate your feedback. Tell us how we can improve the documentation.

To provide documentation feedback for Red Hat OpenStack Services on OpenShift (RHOSO), create a Jira issue in the OSPRH Jira project.

Procedure

  1. Log in to the Red Hat Atlassian Jira.
  2. Click the following link to open a Create Issue page: Create issue
  3. Select Red Hat OpenStack Services on OpenShift as the Project.
  4. Select Bug as the Issue Type.
  5. Click Next.
  6. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue.
  7. Select documentation as the Component.
  8. Click Create.
  9. Review the details of the bug you created.

You install the Red Hat OpenStack Services on OpenShift (RHOSO) OpenStack Operator (openstack-operator) and create the RHOSO control plane on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. You install the OpenStack Operator by using the RHOCP OperatorHub. You perform the control plane installation tasks and all data plane creation tasks on a workstation that has access to the RHOCP cluster.

For information about mapping RHOSO versions to OpenStack Operators and OpenStackVersion Custom Resources (CRs), see the Red Hat Knowledgebase article How RHOSO versions map to OpenStack Operators and OpenStackVersion CRs.

1.1. Prerequisites

You can use the Red Hat OpenShift Container Platform (RHOCP) web console to install the OpenStack Operator (openstack-operator) on your RHOCP cluster from the OperatorHub. After you install the Operator, you configure a single instance of the OpenStack Operator initialization resource, OpenStack, to start the OpenStack Operator on your cluster.

Procedure

  1. Log in to the RHOCP web console as a user with cluster-admin permissions.
  2. Select Operators → OperatorHub.
  3. In the Filter by keyword field, type OpenStack.
  4. Click the OpenStack Operator tile with the Red Hat source label.
  5. Read the information about the Operator and click Install.
  6. On the Install Operator page, select "Operator recommended Namespace: openstack-operators" from the Installed Namespace list.
  7. On the Install Operator page, select "Manual" from the Update approval list. For information about how to manually approve a pending Operator update, see Manually approving a pending Operator update in the RHOCP Operators guide.
  8. Click Install to make the Operator available to the openstack-operators namespace. The OpenStack Operator is installed when the Status is Succeeded.
  9. Click Create OpenStack to open the Create OpenStack page.
  10. On the Create OpenStack page, click Create to create an instance of the OpenStack Operator initialization resource. The OpenStack Operator is ready to use when the Status of the openstack instance is Conditions: Ready.

You can use the Red Hat OpenShift Container Platform (RHOCP) CLI (oc) to install the OpenStack Operator (openstack-operator) on your RHOCP cluster from the OperatorHub.

To install the OpenStack Operator by using the CLI, you create the openstack-operators namespace for the RHOSO service Operators. You then create the OperatorGroup and Subscription custom resources (CRs) within the namespace. After you install the Operator, you configure a single instance of the OpenStack Operator initialization resource, OpenStack, to start the OpenStack Operator on your cluster.

Procedure

  1. Create the openstack-operators namespace for the RHOSO service Operators:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openstack-operators
    spec:
      finalizers:
      - kubernetes
    EOF
  2. Create the OperatorGroup CR in the openstack-operators namespace:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openstack
      namespace: openstack-operators
    EOF
  3. Create the Subscription CR that subscribes to openstack-operator:

    $ cat << EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: openstack-operator
      namespace: openstack-operators
    spec:
      name: openstack-operator
      channel: stable-v1.0
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      installPlanApproval: Manual
    EOF
  4. Retrieve the name of the pending install plan:

    $ oc get installplan -n openstack-operators -o json | jq -r '.items[] | select(.spec.approval=="Manual" and .spec.approved==false) | .metadata.name' | head -n1
  5. Approve the install plan:

    $ oc patch installplan <install_plan_name> -n openstack-operators --type merge -p '{"spec":{"approved":true}}'
    • Replace <install_plan_name> with the name of the pending install plan that you retrieved in the previous step.
  6. Verify that the OpenStack Operator is installed:

    $ oc wait csv -n openstack-operators \
     -l operators.coreos.com/openstack-operator.openstack-operators="" \
     --for jsonpath='{.status.phase}'=Succeeded
  7. Create an instance of the openstack-operator:

    $ cat << EOF | oc apply -f -
    apiVersion: operator.openstack.org/v1beta1
    kind: OpenStack
    metadata:
      name: openstack
      namespace: openstack-operators
    EOF
  8. Confirm that the OpenStack Operator is deployed:

    $ oc wait openstack/openstack -n openstack-operators --for condition=Ready --timeout=500s

You install Red Hat OpenStack Services on OpenShift (RHOSO) on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. To prepare for installing and deploying your RHOSO environment, you must configure the RHOCP worker nodes and the RHOCP networks on your RHOCP cluster.

Red Hat OpenStack Services on OpenShift (RHOSO) services run on Red Hat OpenShift Container Platform (RHOCP) worker nodes. By default, the OpenStack Operator deploys RHOSO services on any worker node. You can use node labels in your OpenStackControlPlane custom resource (CR) to specify which RHOCP nodes host the RHOSO services. By pinning some services to specific infrastructure nodes rather than running the services on all of your RHOCP worker nodes, you optimize the performance of your deployment.

You can create new labels for the RHOCP nodes, or you can use the existing labels, and then specify those labels in the OpenStackControlPlane CR by using the nodeSelector field. For example, the Block Storage service (cinder) has different requirements for each of its services:

  • The cinder-scheduler service is a very light service with low memory, disk, network, and CPU usage.
  • The cinder-api service has high network usage due to resource listing requests.
  • The cinder-volume service has high disk and network usage because many of its operations are in the data path, such as offline volume migration, and creating a volume from an image.
  • The cinder-backup service has high memory, network, and CPU requirements.

Therefore, you can pin the cinder-api, cinder-volume, and cinder-backup services to dedicated nodes and let the OpenStack Operator place the cinder-scheduler service on a node that has capacity.
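As an illustration, a trimmed OpenStackControlPlane snippet that pins the cinder-volume service to nodes carrying a custom label might look like the following sketch. The label key and value (type: cinder-volume-node) and the volume1 backend name are hypothetical; adapt them to your own node labels and Block Storage backends:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        volume1:                       # hypothetical backend name
          nodeSelector:
            type: cinder-volume-node   # hypothetical node label
```

Services without a nodeSelector, such as cinder-scheduler in this example, remain schedulable on any worker node.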

Tip

Alternatively, you can create Topology CRs and use the topologyRef field in your OpenStackControlPlane CR to control service pod placement after your RHOCP cluster has been prepared. For more information, see Controlling service pod placement with Topology CRs.

2.2. Creating the openstack namespace

You must create a namespace within your Red Hat OpenShift Container Platform (RHOCP) environment for the service pods of your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. The service pods of each RHOSO deployment exist in their own namespace within the RHOCP environment.

Prerequisites

  • You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.

Procedure

  1. Create the openstack project for the deployed RHOSO environment:

    $ oc new-project openstack
  2. Ensure the openstack namespace is labeled to enable privileged pod creation by the OpenStack Operators:

    $ oc get namespace openstack -ojsonpath='{.metadata.labels}' | jq
    {
      "kubernetes.io/metadata.name": "openstack",
      "pod-security.kubernetes.io/enforce": "privileged",
      "security.openshift.io/scc.podSecurityLabelSync": "false"
    }

    If the security context constraint (SCC) is not "privileged", use the following commands to change it:

    $ oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite
    $ oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite
  3. Optional: To remove the need to specify the namespace when executing commands on the openstack namespace, set the default namespace to openstack:

    $ oc project openstack

You must create a Secret custom resource (CR) to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods. The following procedure creates a Secret CR with the required password formats for each service.

For an example Secret CR that generates the required passwords and fernet key for you, see Example Secret CR for secure access to the RHOSO service pods.

Warning

You cannot change a service password once the control plane is deployed. If a service password is changed in osp-secret after deploying the control plane, the service is reconfigured to use the new password but the password is not updated in the Identity service (keystone). This results in a service outage.

Prerequisites

  • You have installed python3-cryptography.

Procedure

  1. Create a Secret CR on your workstation, for example, openstack_service_secret.yaml.
  2. Add the following initial configuration to openstack_service_secret.yaml:

    apiVersion: v1
    data:
      AdminPassword: <base64_password>
      AodhPassword: <base64_password>
      BarbicanPassword: <base64_password>
      BarbicanSimpleCryptoKEK: <base64_fernet_key>
      CeilometerPassword: <base64_password>
      CinderPassword: <base64_password>
      DbRootPassword: <base64_password>
      DesignatePassword: <base64_password>
      GlancePassword: <base64_password>
      HeatAuthEncryptionKey: <base64_password>
      HeatPassword: <base64_password>
      IronicInspectorPassword: <base64_password>
      IronicPassword: <base64_password>
      ManilaPassword: <base64_password>
      MetadataSecret: <base64_password>
      NeutronPassword: <base64_password>
      NovaPassword: <base64_password>
      OctaviaPassword: <base64_password>
      PlacementPassword: <base64_password>
      SwiftPassword: <base64_password>
    kind: Secret
    metadata:
      name: osp-secret
      namespace: openstack
    type: Opaque
    • Replace <base64_password> with a 32-character key that is base64 encoded.

      Note

      The HeatAuthEncryptionKey password must be a 32-character key for Orchestration service (heat) encryption. If you increase the length of the passwords for all other services, ensure that the HeatAuthEncryptionKey password remains at length 32.

      You can use the following command to manually generate a base64 encoded password:

      $ echo -n <password> | base64

      Alternatively, if you are using a Linux workstation and you are generating the Secret CR by using a Bash command such as cat, you can replace <base64_password> with the following command to auto-generate random passwords for each service:

      $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
    • Replace the <base64_fernet_key> with a base64 encoded fernet key. You can use the following command to manually generate it:

      $(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)
  3. Create the Secret CR in the cluster:

    $ oc create -f openstack_service_secret.yaml -n openstack
  4. Verify that the Secret CR is created:

    $ oc describe secret osp-secret -n openstack
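The manual password-generation command above is easy to sanity-check locally before you place the values in the Secret CR. The following sketch (assuming GNU coreutils tr, head, and base64 on a Linux workstation) generates one value and confirms that it decodes back to exactly 32 characters:

```shell
# Generate one base64-encoded password the same way as the manual command
# in the procedure (assumes GNU coreutils tr/head/base64)
password=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)

# A Secret value must decode back to the original 32-character key
decoded=$(printf '%s' "$password" | base64 -d)
test "${#decoded}" -eq 32 && echo "password OK: 32 characters"
```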

You must create a Secret custom resource (CR) file to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods.

If you are using a Linux workstation, you can create a Secret CR file called openstack_service_secret.yaml by using the following Bash cat command that generates the required passwords and fernet key for you:

$ cat <<EOF > openstack_service_secret.yaml
apiVersion: v1
data:
  AdminPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  AodhPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  BarbicanPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  BarbicanSimpleCryptoKEK: $(python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64)
  CeilometerPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  CinderPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  DbRootPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  DesignatePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  GlancePassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  HeatAuthEncryptionKey: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  HeatPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  IronicInspectorPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  IronicPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  ManilaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  MetadataSecret: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NeutronPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  NovaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  OctaviaHeartbeatKey: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  OctaviaPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  PlacementPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
  SwiftPassword: $(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)
kind: Secret
metadata:
  name: osp-secret
  namespace: openstack
type: Opaque
EOF

To prepare for configuring and deploying your Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must configure the Red Hat OpenShift Container Platform (RHOCP) networks on your RHOCP cluster.

Note

If you need a centralized gateway for connection to external networks, you can add OVN gateways to the control plane or to dedicated Networker nodes on the data plane. For information about adding optional OVN gateways to the control plane, see Configuring OVN gateways for a Red Hat OpenStack Services on OpenShift deployment.

Red Hat OpenStack Services on OpenShift (RHOSO) requires the following physical data center networks.

Control plane network
Used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances.
Designate network
Used internally by the RHOSO DNS service (designate) to manage the DNS servers. For more information, see Designate networks in Configuring DNS as a service.
Designateext network
Used to provide external access to the DNS service resolver and the DNS servers.
External network

An optional network that is used when required for your environment. For example, you might create an external network for any of the following purposes:

  • To provide virtual machine instances with Internet access.
  • To create flat provider networks that are separate from the control plane.
  • To configure VLAN provider networks on a separate bridge from the control plane.
  • To provide access to virtual machine instances with floating IPs on a network other than the control plane network.

    Note

    When an external network is used for workloads, an OVN gateway is required in some use cases. For more information about the use cases and available options, see Configuring a control plane OVN gateway with a dedicated NIC in Configuring networking services.

Internal API network
Used for internal communication between RHOSO components.
Octavia network
Used to connect Load-balancing service (octavia) controllers running in the control plane. For more information, see Octavia network in Configuring load balancing as a service.
Storage network
Used for block storage, RBD, NFS, FC, and iSCSI.
Storage Management network

An optional network that is used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data.

Note

For more information about Red Hat Ceph Storage network configuration, see "Ceph network configuration" in the Red Hat Ceph Storage Configuration Guide.

Tenant (project) network
Used for data communication between virtual machine instances within the cloud deployment.

Figure 3.1. Physical networks for RHOSO


The following table details the default networks used in a RHOSO deployment.

Note

By default, the control plane and external networks do not use VLANs. Networks that do not use VLANs must be placed on separate NICs. You can use a VLAN for the control plane network on new RHOSO deployments. You can also use the Native VLAN on a trunked interface as the non-VLAN network. For example, you can have the control plane and the internal API on one NIC, and the external network with no VLAN on a separate NIC.

Table 3.1. Default RHOSO networks

| Network name | CIDR | NetConfig allocationRange | MetalLB IPAddressPool range | net-attach-def ipam range | OCP worker nncp range |
| --- | --- | --- | --- | --- | --- |
| ctlplane | 192.168.122.0/24 | 192.168.122.100 - 192.168.122.250 | 192.168.122.80 - 192.168.122.90 | 192.168.122.30 - 192.168.122.70 | 192.168.122.10 - 192.168.122.20 |
| designate | 172.26.0.0/24 | n/a | n/a | 172.26.0.30 - 172.26.0.70 | 172.26.0.10 - 172.26.0.20 |
| designateext | 172.34.0.0/24 | n/a | 172.34.0.80 - 172.34.0.120 | 172.34.0.30 - 172.34.0.70 | 172.34.0.10 - 172.34.0.20 |
| external | 10.0.0.0/24 | 10.0.0.100 - 10.0.0.250 | n/a | n/a | n/a |
| internalapi | 172.17.0.0/24 | 172.17.0.100 - 172.17.0.250 | 172.17.0.80 - 172.17.0.90 | 172.17.0.30 - 172.17.0.70 | 172.17.0.10 - 172.17.0.20 |
| octavia | 172.23.0.0/24 | n/a | n/a | 172.23.0.30 - 172.23.0.70 | n/a |
| storage | 172.18.0.0/24 | 172.18.0.100 - 172.18.0.250 | n/a | 172.18.0.30 - 172.18.0.70 | 172.18.0.10 - 172.18.0.20 |
| storageMgmt | 172.20.0.0/24 | 172.20.0.100 - 172.20.0.250 | n/a | 172.20.0.30 - 172.20.0.70 | 172.20.0.10 - 172.20.0.20 |
| tenant | 172.19.0.0/24 | 172.19.0.100 - 172.19.0.250 | n/a | 172.19.0.30 - 172.19.0.70 | 172.19.0.10 - 172.19.0.20 |

3.2. Preparing RHOCP for RHOSO networks

The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. A RHOSO environment uses isolated networks to separate different types of network traffic, which improves security, performance, and management. You must connect the RHOCP worker nodes to your isolated networks and expose the internal service endpoints on the isolated networks. The public service endpoints are exposed as RHOCP routes by default, because only routes are supported for public endpoints.

Important

The control plane interface name must be consistent across all nodes because network manifests reference the control plane interface name directly. If the control plane interface names are inconsistent, then the RHOSO environment fails to deploy. If the physical interface names are inconsistent on the nodes, you must create a Linux bond that configures a consistent alternative name for the physical interfaces that can be referenced by the other network manifests.
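For example, a minimal desiredState fragment (with hypothetical NIC and bond names) that wraps the physical interface in a bond named ctlplane-bond, so that the other network manifests can reference one consistent interface name on every node:

```yaml
interfaces:
- name: ctlplane-bond          # consistent name for other manifests to reference
  type: bond
  state: up
  ipv4:
    enabled: true
    dhcp: false
    address:
    - ip: 192.168.122.10
      prefix-length: 24
  link-aggregation:
    mode: active-backup
    port:
    - enp6s0                   # physical NIC; may be named differently on other nodes
```

The bond member list is the only part that varies per node; every other manifest can then use ctlplane-bond unchanged.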

Note

The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual stack (IPv4 and IPv6) is available only on project (tenant) networks. For information about how to configure IPv6 addresses, see the following resources in the RHOCP Networking guide:

You use the NMState Operator to connect the RHOCP worker nodes to your isolated networks. Create a NodeNetworkConfigurationPolicy (nncp) CR to configure the interfaces for each isolated network on each worker node in the RHOCP cluster.

Procedure

  1. Create a NodeNetworkConfigurationPolicy (nncp) CR file on your workstation, for example, openstack-nncp.yaml.
  2. Retrieve the names of the worker nodes in the RHOCP cluster:

    $ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"
  3. Discover the network configuration:

    $ oc get nns/<worker_node> -o yaml | more
    • Replace <worker_node> with the name of a worker node retrieved in step 2, for example, worker-1. Repeat this step for each worker node.
  4. In the nncp CR file, configure the interfaces for each isolated network on each worker node in the RHOCP cluster. For information about the default physical data center networks that must be configured with network isolation, see Networks for Red Hat OpenStack Services on OpenShift.

    In the following example, the nncp CR configures the enp6s0 interface for worker node 1, osp-enp6s0-worker-1, to use VLAN interfaces with IPv4 addresses for network isolation:

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: osp-enp6s0-worker-1
    spec:
      desiredState:
        interfaces:
        - description: internalapi vlan interface
          ipv4:
            address:
            - ip: 172.17.0.10
              prefix-length: 24
            enabled: true
            dhcp: false
          ipv6:
            enabled: false
          name: internalapi
          state: up
          type: vlan
          vlan:
            base-iface: enp6s0
            id: 20
            reorder-headers: true
        - description: storage vlan interface
          ipv4:
            address:
            - ip: 172.18.0.10
              prefix-length: 24
            enabled: true
            dhcp: false
          ipv6:
            enabled: false
          name: storage
          state: up
          type: vlan
          vlan:
            base-iface: enp6s0
            id: 21
            reorder-headers: true
        - description: tenant vlan interface
          ipv4:
            address:
            - ip: 172.19.0.10
              prefix-length: 24
            enabled: true
            dhcp: false
          ipv6:
            enabled: false
          name: tenant
          state: up
          type: vlan
          vlan:
            base-iface: enp6s0
            id: 22
            reorder-headers: true
        - description: Configuring enp6s0
          ipv4:
            address:
            - ip: 192.168.122.10
              prefix-length: 24
            enabled: true
            dhcp: false
          ipv6:
            enabled: false
          mtu: 1500
          name: enp6s0
          state: up
          type: ethernet
        - description: octavia vlan interface
          name: octavia
          state: up
          type: vlan
          vlan:
            base-iface: enp6s0
            id: 24
            reorder-headers: true
        - bridge:
            options:
              stp:
                enabled: false
            port:
            - name: enp6s0.24
          description: Configuring bridge octbr
          mtu: 1500
          name: octbr
          state: up
          type: linux-bridge
        - description: designate vlan interface
          ipv4:
            address:
            - ip: 172.26.0.10
              prefix-length: 24
            dhcp: false
            enabled: true
          ipv6:
            enabled: false
          mtu: 1500
          name: designate
          state: up
          type: vlan
          vlan:
            base-iface: enp7s0
            id: 25
            reorder-headers: true
        - description: designate external vlan interface
          ipv4:
            address:
            - ip: 172.34.0.10
              prefix-length: 24
            dhcp: false
            enabled: true
          ipv6:
            enabled: false
          mtu: 1500
          name: designateext
          state: up
          type: vlan
          vlan:
            base-iface: enp7s0
            id: 26
            reorder-headers: true
      nodeSelector:
        kubernetes.io/hostname: worker-1
        node-role.kubernetes.io/worker: ""
  5. Create the nncp CR in the cluster:

    $ oc apply -f openstack-nncp.yaml
  6. Verify that the nncp CR is created:

    $ oc get nncp -w
    NAME                        STATUS        REASON
    osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
    osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
    osp-enp6s0-worker-1   Available     SuccessfullyConfigured

Create a NetworkAttachmentDefinition (net-attach-def) custom resource (CR) for each isolated network to attach the service pods to the networks.

Note

If you frequently recreate pods in your environment, use the Whereabouts reconciler to manage dynamic IP address assignments for the pods. For more information, see Creating a whereabouts-reconciler daemon set in the RHOCP Multiple networks guide.

Procedure

  1. Create a NetworkAttachmentDefinition (net-attach-def) CR file on your workstation, for example, openstack-net-attach-def.yaml.
  2. In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for each isolated network to attach a service deployment pod to the network. The following examples create a NetworkAttachmentDefinition resource for the following networks:

    • internalapi, storage, ctlplane, and tenant networks of type macvlan.
    • octavia, the load-balancing management network, of type bridge. This network attachment connects pods that manage load balancer virtual machines (amphorae) and the Open vSwitch pods that are managed by the OVN operator.
    • designate network used internally by the DNS service (designate) to manage the DNS servers.
    • designateext network used to provide external access to the DNS service resolver and the DNS servers.
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: internalapi
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "internalapi",
          "type": "macvlan",
          "master": "internalapi",
          "ipam": {
            "type": "whereabouts",
            "range": "172.17.0.0/24",
            "range_start": "172.17.0.30",
            "range_end": "172.17.0.70"
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: ctlplane
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "ctlplane",
          "type": "macvlan",
          "master": "enp6s0",
          "ipam": {
            "type": "whereabouts",
            "range": "192.168.122.0/24",
            "range_start": "192.168.122.30",
            "range_end": "192.168.122.70"
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: storage
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "storage",
          "type": "macvlan",
          "master": "storage",
          "ipam": {
            "type": "whereabouts",
            "range": "172.18.0.0/24",
            "range_start": "172.18.0.30",
            "range_end": "172.18.0.70"
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: tenant
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "tenant",
          "type": "macvlan",
          "master": "tenant",
          "ipam": {
            "type": "whereabouts",
            "range": "172.19.0.0/24",
            "range_start": "172.19.0.30",
            "range_end": "172.19.0.70"
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      labels:
        osp/net: octavia
      name: octavia
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "octavia",
          "type": "bridge",
          "bridge": "octbr",
          "ipam": {
            "type": "whereabouts",
            "range": "172.23.0.0/24",
            "range_start": "172.23.0.30",
            "range_end": "172.23.0.70",
            "routes": [
               {
                 "dst": "172.24.0.0/16",
                 "gw" : "172.23.0.150"
               }
             ]
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: designate
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "designate",
          "type": "macvlan",
          "master": "designate",
          "ipam": {
            "type": "whereabouts",
            "range": "172.26.0.0/16",
            "range_start": "172.26.0.30",
            "range_end": "172.26.0.70"
          }
        }
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: designateext
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "designateext",
          "type": "macvlan",
          "master": "designateext",
          "ipam": {
            "type": "whereabouts",
            "range": "172.34.0.0/16",
            "range_start": "172.34.0.30",
            "range_end": "172.34.0.70"
          }
        }
    • metadata.namespace: The namespace where the services are deployed.
    • "master": The node interface name associated with the network, as defined in the nncp CR.
    • "ipam": The whereabouts CNI IPAM plug-in assigns IPs to the created pods from the range .30 - .70.
    • "range_start" - "range_end": The IP address pool range must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange.
  3. Create the NetworkAttachmentDefinition CR in the cluster:

    $ oc apply -f openstack-net-attach-def.yaml
  4. Verify that the NetworkAttachmentDefinition CR is created:

    $ oc get net-attach-def -n openstack

3.2.3. Preparing RHOCP for RHOSO network VIPs

You use the MetalLB Operator to expose internal service endpoints on the isolated networks. You must create an L2Advertisement resource to define how the Virtual IPs (VIPs) are announced, and an IPAddressPool resource to configure which IPs can be used as VIPs. In layer 2 mode, one node assumes the responsibility of advertising a service to the local network.

Procedure

  1. Create an IPAddressPool CR file on your workstation, for example, openstack-ipaddresspools.yaml.
  2. In the IPAddressPool CR file, configure an IPAddressPool resource on the isolated network to specify the IP address ranges over which MetalLB has authority:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      namespace: metallb-system
      name: ctlplane
    spec:
      addresses:
        - 192.168.122.80-192.168.122.90
      autoAssign: true
      avoidBuggyIPs: false
      serviceAllocation:
        namespaces:
          - openstack
    ---
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: internalapi
      namespace: metallb-system
    spec:
      addresses:
        - 172.17.0.80-172.17.0.90
      autoAssign: true
      avoidBuggyIPs: false
    ---
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      namespace: metallb-system
      name: designateext
    spec:
      addresses:
        - 172.34.0.80-172.34.0.120
      autoAssign: true
      avoidBuggyIPs: false
    ---
    • spec.addresses: The IPAddressPool range must not overlap with the whereabouts IPAM range or the NetConfig allocationRange.
    • spec.serviceAllocation: Specify the namespaces that can consume IP addresses from the IPAddressPool range. This is an optional field that you can configure to prevent non-RHOSO services hosted on your RHOCP cluster from consuming IP addresses from the IPAddressPool range.

    For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide.

  3. Create the IPAddressPool CRs in the cluster:

    $ oc apply -f openstack-ipaddresspools.yaml
  4. Verify that the IPAddressPool CRs are created:

    $ oc describe -n metallb-system IPAddressPool
  5. Create an L2Advertisement CR file on your workstation, for example, openstack-l2advertisement.yaml.
  6. In the L2Advertisement CR file, configure L2Advertisement CRs to define which node advertises a service to the local network. Create one L2Advertisement resource for each network.

    In the following example, each L2Advertisement CR specifies that the VIPs requested from the network address pools are announced on the interface that is attached to the VLAN:

    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: ctlplane
      namespace: metallb-system
    spec:
      ipAddressPools:
      - ctlplane
      interfaces:
      - enp6s0
      nodeSelectors:
      - matchLabels:
          node-role.kubernetes.io/worker: ""
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: internalapi
      namespace: metallb-system
    spec:
      ipAddressPools:
      - internalapi
      interfaces:
      - internalapi
      nodeSelectors:
      - matchLabels:
          node-role.kubernetes.io/worker: ""
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: designateext
      namespace: metallb-system
    spec:
      ipAddressPools:
      - designateext
      interfaces:
      - designateext
      nodeSelectors:
      - matchLabels:
          node-role.kubernetes.io/worker: ""
    • spec.interfaces: The interface where the VIPs requested from the VLAN address pool are announced.

    For information about how to configure the other L2Advertisement resource parameters, see Configuring MetalLB with a L2 advertisement and label in the RHOCP Networking guide.

  7. Create the L2Advertisement CRs in the cluster:

    $ oc apply -f openstack-l2advertisement.yaml
  8. Verify that the L2Advertisement CRs are created:

    $ oc get -n metallb-system L2Advertisement
    NAME          IPADDRESSPOOLS    IPADDRESSPOOL SELECTORS   INTERFACES
    ctlplane      ["ctlplane"]                                ["enp6s0"]
    designateext  ["designateext"]                            ["designateext"]
    internalapi   ["internalapi"]                             ["internalapi"]
    storage       ["storage"]                                 ["storage"]
    tenant        ["tenant"]                                  ["tenant"]
  9. If your cluster has OVNKubernetes as the network back end, then you must enable global forwarding so that MetalLB can work on a secondary network interface.

    1. Check the network back end used by your cluster:

      $ oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.type}'
    2. If the back end is OVNKubernetes, then run the following command to enable global IP forwarding:

      $ oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge

3.3. Creating the data plane network

To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. You can also define VLAN networks to create network isolation for composable networks, such as internalapi, storage, and external. Each network definition must include the IP address assignment.

Tip

Use the following commands to view the NetConfig CRD definition and specification schema:

$ oc describe crd netconfig

$ oc explain netconfig.spec

Procedure

  1. Create a file named openstack_netconfig.yaml on your workstation.
  2. Add the following configuration to openstack_netconfig.yaml to create the NetConfig CR:

    apiVersion: network.openstack.org/v1beta1
    kind: NetConfig
    metadata:
      name: openstacknetconfig
      namespace: openstack
  3. In the openstack_netconfig.yaml file, define the topology for each data plane network. To use the default Red Hat OpenStack Services on OpenShift (RHOSO) networks, you must define a specification for each network. For information about the default RHOSO networks, see Networks for Red Hat OpenStack Services on OpenShift.

    Note

    If you are using pre-provisioned data plane nodes, then the control plane network and IP address must match the pre-provisioned data plane nodes. If the ctlplane network uses tagged VLANs, then the VLAN ID must also match the VLAN ID on the pre-provisioned data plane node.

    The following example creates isolated networks for the data plane:

    spec:
      networks:
      - name: ctlplane
        dnsDomain: ctlplane.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 192.168.122.120
            start: 192.168.122.100
          - end: 192.168.122.200
            start: 192.168.122.150
          cidr: 192.168.122.0/24
          gateway: 192.168.122.1
      - name: internalapi
        dnsDomain: internalapi.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.17.0.250
            start: 172.17.0.100
          excludeAddresses:
          - 172.17.0.10
          - 172.17.0.12
          cidr: 172.17.0.0/24
          vlan: 20
      - name: external
        dnsDomain: external.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 10.0.0.250
            start: 10.0.0.100
          cidr: 10.0.0.0/24
          gateway: 10.0.0.1
      - name: storage
        dnsDomain: storage.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.18.0.250
            start: 172.18.0.100
          cidr: 172.18.0.0/24
          vlan: 21
      - name: tenant
        dnsDomain: tenant.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 172.19.0.250
            start: 172.19.0.100
          cidr: 172.19.0.0/24
          vlan: 22
    • spec.networks.name: The name of the network, for example, ctlplane.
    • spec.networks.subnets: The IPv4 subnet specification.
    • spec.networks.subnets.name: The name of the subnet, for example, subnet1.
    • spec.networks.subnets.allocationRanges: The NetConfig allocationRange. The allocationRange must not overlap with the MetalLB IPAddressPool range or the whereabouts IPAM range.
    • spec.networks.subnets.excludeAddresses: Optional: List of IP addresses from the allocation range that must not be used by data plane nodes.
    • spec.networks.subnets.vlan: The network VLAN. For information about the default RHOSO networks, see Networks for Red Hat OpenStack Services on OpenShift.
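    If your pre-provisioned data plane nodes carry the ctlplane network on a tagged VLAN, add a vlan key to the ctlplane subnet so that the IDs match. The following fragment is a sketch; the VLAN ID 10 is an assumption, so substitute the ID that is configured on your pre-provisioned nodes:

    ```yaml
    spec:
      networks:
      - name: ctlplane
        dnsDomain: ctlplane.example.com
        subnets:
        - name: subnet1
          allocationRanges:
          - end: 192.168.122.120
            start: 192.168.122.100
          cidr: 192.168.122.0/24
          gateway: 192.168.122.1
          vlan: 10  # assumption: must match the VLAN ID on the pre-provisioned nodes
    ```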
  4. Save the openstack_netconfig.yaml definition file.
  5. Create the data plane network:

    $ oc create -f openstack_netconfig.yaml -n openstack
  6. To verify that the data plane network is created, view the openstacknetconfig resource:

    $ oc get netconfig/openstacknetconfig -n openstack

    If you see errors, check the underlying network-attach-definition and node network configuration policies:

    $ oc get network-attachment-definitions -n openstack
    $ oc get nncp

3.4. Additional resources

Chapter 4. Creating the control plane

The Red Hat OpenStack Services on OpenShift (RHOSO) control plane contains the RHOSO services that manage the cloud. The RHOSO services run as a Red Hat OpenShift Container Platform (RHOCP) workload.

Note

Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands.

4.1. Prerequisites

  • The OpenStack Operator (openstack-operator) is installed. For more information, see Installing and preparing the OpenStack Operator.
  • The RHOCP cluster is prepared for RHOSO networks. For more information, see Preparing RHOCP for RHOSO networks.
  • The RHOCP cluster is not configured with any network policies that prevent communication between the openstack-operators namespace and the control plane namespace (default openstack). Use the following command to check the existing network policies on the cluster:

    $ oc get networkpolicy -n openstack

    This command returns the message "No resources found in openstack namespace" when there are no network policies. If this command returns a list of network policies, then check that they do not prevent communication between the openstack-operators namespace and the control plane namespace. For more information about network policies, see Network security in the RHOCP Networking guide.
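    If an existing policy does block this traffic, one remedy is an additional allow policy. The following is a minimal sketch, assuming the default namespace names; the policy name is hypothetical:

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-openstack-operators  # hypothetical name
      namespace: openstack
    spec:
      podSelector: {}  # selects all pods in the openstack namespace
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openstack-operators
    ```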

  • You are logged in to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.

4.2. Creating the control plane

You must define an OpenStackControlPlane custom resource (CR) to create the control plane and enable the Red Hat OpenStack Services on OpenShift (RHOSO) services.

The following procedure creates an initial control plane with the recommended configurations for each service. The procedure helps you quickly create an operating control plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add service customizations to a deployed environment. For more information about how to customize your control plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.

For an example OpenStackControlPlane CR, see Example OpenStackControlPlane CR.

Tip

Use the following commands to view the OpenStackControlPlane CRD definition and specification schema:

$ oc describe crd openstackcontrolplane

$ oc explain openstackcontrolplane.spec

Procedure

  1. Create a file on your workstation named openstack_control_plane.yaml to define the OpenStackControlPlane CR:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
  2. Specify the Secret CR you created to provide secure access to the RHOSO service pods in Providing secure access to the Red Hat OpenStack Services on OpenShift services:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      secret: osp-secret
  3. Specify the storageClass you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end:

    spec:
      secret: osp-secret
      storageClass: <RHOCP_storage_class>
    • Replace <RHOCP_storage_class> with the storage class you created for your RHOCP cluster storage back end. For information about storage classes, see Creating a storage class.
  4. Add the global RabbitMQ settings messagingBus and notificationsBus to specify the default RabbitMQ cluster for RHOSO services:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
    spec:
      messagingBus:
        cluster: <rabbitmq-cluster>
    
      notificationsBus:
        cluster: <rabbitmq-cluster>
    • Replace <rabbitmq-cluster> with the name of the default RabbitMQ cluster that all the RHOSO services use, for example, rabbitmq. You can customize the RabbitMQ interface for OpenStack services. For more information, see Understand the RabbitMQ interface for OpenStack services in the Monitoring high availability services guide.
  5. Add the following service configurations:

    Note
    • The following service examples use IP addresses from the default RHOSO MetalLB IPAddressPool range for the loadBalancerIPs field. Update the loadBalancerIPs field with the IP address from the MetalLB IPAddressPool range that you created.
    • You cannot override the default public service endpoint. The public service endpoints are exposed as RHOCP routes by default, because only routes are supported for public endpoints.
    • Block Storage service (cinder):

        cinder:
          apiOverride:
            route: {}
          template:
            databaseInstance: openstack
            secret: osp-secret
            cinderAPI:
              replicas: 3
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
            cinderScheduler:
              replicas: 1
            cinderBackup:
              networkAttachments:
              - storage
              replicas: 0
            cinderVolumes:
              volume1:
                networkAttachments:
                - storage
                replicas: 0
      • cinderBackup.replicas: You can deploy the initial control plane without activating the cinderBackup service. To deploy the service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for each service and how to configure a back end for the Block Storage service and the backup service, see Configuring the Block Storage backup service in Configuring persistent storage.
      • cinderVolumes.replicas: You can deploy the initial control plane without activating the cinderVolumes service. To deploy the service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for the cinderVolumes service and how to configure a back end for the service, see Configuring the Block Storage volume service component in Configuring persistent storage.
    • Compute service (nova):

        nova:
          apiOverride:
            route: {}
          template:
            apiServiceTemplate:
              replicas: 3
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
            metadataServiceTemplate:
              replicas: 3
              override:
                service:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            schedulerServiceTemplate:
              replicas: 3
            cellTemplates:
              cell0:
                cellDatabaseAccount: nova-cell0
                cellDatabaseInstance: openstack
                messagingBus:
                  cluster: rabbitmq
                hasAPIAccess: true
              cell1:
                cellDatabaseAccount: nova-cell1
                cellDatabaseInstance: openstack-cell1
                messagingBus:
                  cluster: rabbitmq-cell1
                noVNCProxyServiceTemplate:
                  enabled: true
                  networkAttachments:
                  - ctlplane
                hasAPIAccess: true
            secret: osp-secret
      Note

      A full set of Compute services (nova) is deployed by default for each of the default cells, cell0 and cell1: nova-api, nova-metadata, nova-scheduler, and nova-conductor. The novncproxy service is also enabled for cell1 by default.

    • DNS service for the data plane:

        dns:
          template:
            options:
            - key: server
              values:
              - <IP address for DNS server reachable from dnsmasq pod>
            override:
              service:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: ctlplane
                    metallb.universe.tf/allow-shared-ip: ctlplane
                    metallb.universe.tf/loadBalancerIPs: 192.168.122.80
                spec:
                  type: LoadBalancer
            replicas: 2
      • options: Defines the dnsmasq instances required for each DNS server by using key-value pairs. In this example, there is one key-value pair defined because there is only one DNS server configured to forward requests to.
      • key: Specifies the dnsmasq parameter to customize for the deployed dnsmasq instance. Set to one of the following valid values:

        • server
        • rev-server
        • srv-host
        • txt-record
        • ptr-record
        • rebind-domain-ok
        • naptr-record
        • cname
        • host-record
        • caa-record
        • dns-rr
        • auth-zone
        • synth-domain
        • no-negcache
        • local
      • values: Specifies the value for the DNS server reachable from the dnsmasq pod on the RHOCP cluster network. You can specify a generic DNS server as the value, for example, 1.1.1.1, or a DNS server for a specific domain, for example, /google.com/8.8.8.8.

        Note

        This DNS service, dnsmasq, provides DNS services for nodes on the RHOSO data plane. dnsmasq is different from the RHOSO DNS service (designate) that provides DNS as a service for cloud tenants.
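    If your environment forwards requests to more than one DNS server, define one key-value pair per server. The following fragment is a sketch with assumed server addresses, mixing a generic upstream server with a domain-specific one:

    ```yaml
    dns:
      template:
        options:
        - key: server
          values:
          - 192.168.122.1  # assumption: generic upstream DNS server
        - key: server
          values:
          - /example.com/192.168.122.2  # assumption: forwards example.com queries only
    ```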

    • Identity service (keystone)

        keystone:
          apiOverride:
            route: {}
          template:
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            databaseInstance: openstack
            secret: osp-secret
            replicas: 3
    • Image service (glance):

        glance:
          apiOverrides:
            default:
              route: {}
          template:
            databaseInstance: openstack
            storage:
              storageRequest: 10G
            secret: osp-secret
            keystoneEndpoint: default
            glanceAPIs:
              default:
                replicas: 0 # Configure back end; set to 3 when deploying service
                override:
                  service:
                    internal:
                      metadata:
                        annotations:
                          metallb.universe.tf/address-pool: internalapi
                          metallb.universe.tf/allow-shared-ip: internalapi
                          metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                      spec:
                        type: LoadBalancer
                networkAttachments:
                - storage
      • glanceAPIs.default.replicas: You can deploy the initial control plane without activating the Image service (glance). To deploy the Image service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for the Image service and how to configure a back end for the service, see Configuring the Image service (glance) in Configuring persistent storage. If you do not deploy the Image service, you cannot upload images to the cloud or start an instance.
    • Key Management service (barbican):

        barbican:
          apiOverride:
            route: {}
          template:
            databaseInstance: openstack
            secret: osp-secret
            barbicanAPI:
              replicas: 3
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
            barbicanWorker:
              replicas: 3
            barbicanKeystoneListener:
              replicas: 1
    • Networking service (neutron):

        neutron:
          apiOverride:
            route: {}
          template:
            replicas: 3
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            databaseInstance: openstack
            secret: osp-secret
            networkAttachments:
            - internalapi
    • Object Storage service (swift):

        swift:
          enabled: true
          proxyOverride:
            route: {}
          template:
            swiftProxy:
              networkAttachments:
              - storage
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
              replicas: 2
              secret: osp-secret
            swiftRing:
              ringReplicas: 3
            swiftStorage:
              networkAttachments:
              - storage
              replicas: 3
              storageRequest: 10Gi
    • Optimize service (watcher):

        watcher:
          enabled: true
    • OVN:

        ovn:
          template:
            ovnDBCluster:
              ovndbcluster-nb:
                replicas: 3
                dbType: NB
                storageRequest: 10G
                networkAttachment: internalapi
              ovndbcluster-sb:
                replicas: 3
                dbType: SB
                storageRequest: 10G
                networkAttachment: internalapi
            ovnNorthd: {}
    • Placement service (placement):

        placement:
          apiOverride:
            route: {}
          template:
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            databaseInstance: openstack
            replicas: 3
            secret: osp-secret
    • Telemetry service (ceilometer, prometheus):

        telemetry:
          enabled: true
          template:
            metricStorage:
              enabled: true
              dashboardsEnabled: true
              dataplaneNetwork: ctlplane
              networkAttachments:
                - ctlplane
              monitoringStack:
                alertingEnabled: true
                scrapeInterval: 30s
                storage:
                  strategy: persistent
                  retention: 24h
                  persistent:
                    pvcStorageRequest: 20G
            autoscaling:
              enabled: false
              aodh:
                databaseAccount: aodh
                databaseInstance: openstack
                passwordSelector:
                  aodhService: AodhPassword
                serviceUser: aodh
                secret: osp-secret
              heatInstance: heat
            ceilometer:
              enabled: true
              secret: osp-secret
            logging:
              enabled: false
      • telemetry.template.metricStorage.dataplaneNetwork: Defines the network that you use to scrape dataplane node_exporter endpoints.
      • telemetry.template.metricStorage.networkAttachments: Lists the networks that each service pod is attached to by using the NetworkAttachmentDefinition resource names. You configure a NIC for the service for each network attachment that you specify. If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. You must create a networkAttachment that matches the network that you specify as the dataplaneNetwork, so that Prometheus can scrape data from the dataplane nodes.
      • telemetry.template.autoscaling: You must have the autoscaling field present, even if autoscaling is disabled. For more information about autoscaling, see Autoscaling for Instances.
  6. Add the following service configurations to implement high availability (HA):

    • A MariaDB Galera cluster for use by all RHOSO services (openstack), and a MariaDB Galera cluster for use by the Compute service for cell1 (openstack-cell1):

        galera:
          templates:
            openstack:
              storageRequest: 5000M
              secret: osp-secret
              replicas: 3
            openstack-cell1:
              storageRequest: 5000M
              secret: osp-secret
              replicas: 3
    • A single memcached cluster that contains three memcached servers:

        memcached:
          templates:
            memcached:
               replicas: 3
    • A RabbitMQ cluster for use by all RHOSO services (rabbitmq), and a RabbitMQ cluster for use by the Compute service for cell1 (rabbitmq-cell1):

        rabbitmq:
          templates:
            rabbitmq:
              persistence:
                storage: <rabbitmq_cluster_storage>
              replicas: 3
              override:
                service:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.85
                  spec:
                    type: LoadBalancer
            rabbitmq-cell1:
              persistence:
                storage: <rabbitmq_cluster_storage>
              replicas: 3
              override:
                service:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.86
                  spec:
                    type: LoadBalancer
    • Replace <rabbitmq_cluster_storage> with sufficient storage for each RabbitMQ cluster, for example, 10Gi.

      Note

      You cannot configure multiple RabbitMQ instances on the same virtual IP (VIP) address because all RabbitMQ instances use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses.

      By default, RabbitMQ uses Quorum queues to provide increased data safety and high availability at the expense of a slight increase in latency.

      Warning

      Do not configure the RabbitMQ clusters of an existing RHOSO deployment to use Quorum queues. If you do, your existing RHOSO services will not start or work properly.

      The RabbitMQ Quorum queue is a durable replicated queue based on the Raft consensus algorithm.

      Note

      The RabbitMQ Quorum queues must be saved to disk to ensure their durability. Quorum queues can occupy significant disk space. Depending on the workload and settings, the time taken to free up space after messages are consumed or expire can vary substantially. Ensure that sufficient storage is assigned to each RabbitMQ cluster.

      Improved RabbitMQ failover behavior

      The RabbitMQ service can be configured to provide improved failover behavior that bypasses the default RHOCP availability checks, which can take up to five minutes to declare a service dead. With this improved failover behavior, each OpenStack service checks whether a RabbitMQ pod is alive and moves to the next pod if it is not.

      Note

      To implement this improved failover behavior for a RabbitMQ cluster, you must reserve three free IP addresses from the default RHOSO MetalLB IPAddressPool range already used for RabbitMQ.

      For example, to improve the failover behavior of the rabbitmq-cell1 RabbitMQ cluster, add the following lines to the end of the rabbitmq.templates.rabbitmq-cell1 section of the OpenStackControlPlane CR:

          podOverride:
            services:
              - metadata:
                  annotations:
                    metallb.universe.tf/address-pool: "internalapi"
                    metallb.universe.tf/loadBalancerIPs: "172.17.0.87"
                spec:
                  type: LoadBalancer
              - metadata:
                  annotations:
                    metallb.universe.tf/address-pool: "internalapi"
                    metallb.universe.tf/loadBalancerIPs: "172.17.0.88"
                spec:
                  type: LoadBalancer
              - metadata:
                  annotations:
                    metallb.universe.tf/address-pool: "internalapi"
                    metallb.universe.tf/loadBalancerIPs: "172.17.0.89"
                spec:
                  type: LoadBalancer
  7. Create the control plane:

    $ oc create -f openstack_control_plane.yaml -n openstack
  8. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

    Note

    Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands.

    $ oc rsh -n openstack openstackclient
  9. Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are in the Running or Completed state.

Verification

  1. Open a remote shell connection to the OpenStackClient pod:

    $ oc rsh -n openstack openstackclient
  2. Confirm that the internal service endpoints are registered with each service:

    $ openstack endpoint list -c 'Service Name' -c Interface -c URL --service glance
    +--------------+-----------+-------------------------------------------------------------------------+
    | Service Name | Interface | URL                                                                     |
    +--------------+-----------+-------------------------------------------------------------------------+
    | glance       | internal  | https://glance-internal.openstack.svc                                   |
    | glance       | public    | https://glance-default-public-openstack.apps.ostest.test.metalkube.org  |
    +--------------+-----------+-------------------------------------------------------------------------+
  3. Exit the OpenStackClient pod:

    $ exit

4.3. Example OpenStackControlPlane CR

The following example OpenStackControlPlane CR is a complete control plane configuration that includes all the key services that must always be enabled for a successful deployment.

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  messagingBus:
    cluster: rabbitmq

  notificationsBus:
    cluster: rabbitmq

  secret: osp-secret
  storageClass: your-RHOCP-storage-class
  cinder:
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      cinderAPI:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      cinderScheduler:
        replicas: 1
      cinderBackup:
        networkAttachments:
        - storage
        replicas: 0 # backend needs to be configured to activate the service
      cinderVolumes:
        volume1:
          networkAttachments:
          - storage
          replicas: 0 # backend needs to be configured to activate the service
  nova:
    apiOverride:
      route: {}
    template:
      apiServiceTemplate:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      metadataServiceTemplate:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      schedulerServiceTemplate:
        replicas: 3
      cellTemplates:
        cell0:
          cellDatabaseAccount: nova-cell0
          cellDatabaseInstance: openstack
          messagingBus:
            cluster: rabbitmq
          hasAPIAccess: true
        cell1:
          cellDatabaseAccount: nova-cell1
          cellDatabaseInstance: openstack-cell1
          messagingBus:
            cluster: rabbitmq-cell1
          noVNCProxyServiceTemplate:
            enabled: true
            networkAttachments:
            - ctlplane
          hasAPIAccess: true
      secret: osp-secret
  dns:
    template:
      options:
      - key: server
        values:
        - 192.168.122.1
      - key: server
        values:
        - 192.168.122.2
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: ctlplane
              metallb.universe.tf/allow-shared-ip: ctlplane
              metallb.universe.tf/loadBalancerIPs: 192.168.122.80
          spec:
            type: LoadBalancer
      replicas: 2
  galera:
    templates:
      openstack:
        storageRequest: 5000M
        secret: osp-secret
        replicas: 3
      openstack-cell1:
        storageRequest: 5000M
        secret: osp-secret
        replicas: 3
  keystone:
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
      replicas: 3
  glance:
    apiOverrides:
      default:
        route: {}
    template:
      databaseInstance: openstack
      storage:
        storageRequest: 10G
      secret: osp-secret
      keystoneEndpoint: default
      glanceAPIs:
        default:
          replicas: 0 # Configure back end; set to 3 when deploying service
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          networkAttachments:
          - storage
  barbican:
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      barbicanAPI:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      barbicanWorker:
        replicas: 3
      barbicanKeystoneListener:
        replicas: 1
  memcached:
    templates:
      memcached:
         replicas: 3
  neutron:
    apiOverride:
      route: {}
    template:
      replicas: 3
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
      networkAttachments:
      - internalapi
  swift:
    enabled: true
    proxyOverride:
      route: {}
    template:
      swiftProxy:
        networkAttachments:
        - storage
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        replicas: 2
      swiftRing:
        ringReplicas: 3
      swiftStorage:
        networkAttachments:
        - storage
        replicas: 3
        storageRequest: 100Gi
  ovn:
    template:
      ovnDBCluster:
        ovndbcluster-nb:
          replicas: 3
          dbType: NB
          storageRequest: 10G
          networkAttachment: internalapi
        ovndbcluster-sb:
          replicas: 3
          dbType: SB
          storageRequest: 10G
          networkAttachment: internalapi
      ovnNorthd: {}
      ovnController:
        networkAttachment: tenant
        nicMappings:
          my-network: nic1
  placement:
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      replicas: 3
      secret: osp-secret
  rabbitmq:
    templates:
      rabbitmq:
        persistence:
          storage: 10Gi
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.85
            spec:
              type: LoadBalancer
      rabbitmq-cell1:
        persistence:
          storage: 10Gi
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.86
            spec:
              type: LoadBalancer
  telemetry:
    enabled: true
    template:
      metricStorage:
        enabled: true
        dashboardsEnabled: true
        dataplaneNetwork: ctlplane
        networkAttachments:
          - ctlplane
        monitoringStack:
          alertingEnabled: true
          scrapeInterval: 30s
          storage:
            strategy: persistent
            retention: 24h
            persistent:
              pvcStorageRequest: 20G
      autoscaling:
        enabled: false
        aodh:
          databaseAccount: aodh
          databaseInstance: openstack
          passwordSelector:
            aodhService: AodhPassword
          serviceUser: aodh
          secret: osp-secret
        heatInstance: heat
      ceilometer:
        enabled: true
        secret: osp-secret
      logging:
        enabled: false
  • spec.storageClass: The storage class that you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end.
  • spec.cinder: Service-specific parameters for the Block Storage service (cinder).
  • spec.cinder.template.cinderBackup: The Block Storage service back end. For more information on configuring storage services, see the Configuring persistent storage guide.
  • spec.cinder.template.cinderVolumes: The Block Storage service configuration. For more information on configuring storage services, see the Configuring persistent storage guide.
  • spec.cinder.template.cinderVolumes.networkAttachments: The list of networks that each service pod is directly attached to, specified by using the NetworkAttachmentDefinition resource names. A NIC is configured for the service for each specified network attachment.

    Note

    If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. For example, the Block Storage service uses the storage network to connect to a storage back end; the Identity service (keystone) uses an LDAP or Active Directory (AD) network; the ovnDBCluster service uses the internalapi network; and the ovnController service uses the tenant network.

  • spec.nova: Service-specific parameters for the Compute service (nova).
  • spec.nova.apiOverride: Service API route definition. You can customize the service route by using route-specific annotations. For more information, see Route-specific annotations in the RHOCP Networking guide. Set route: to {} to apply the default route template.
  • nicMappings: Pairs the physical network your gateway is on with the NIC that connects to the gateway network. This physical network is set in the neutron network provider:physical_network field. You can optionally add more <network_name>:<nic_name> pairs as required.
  • metallb.universe.tf/address-pool: The internal service API endpoint registered as a MetalLB service with the IPAddressPool internalapi.
  • metallb.universe.tf/loadBalancerIPs: The virtual IP (VIP) address for the service. The IP is shared with other services by default.
  • spec.rabbitmq: The RabbitMQ instances exposed to an isolated network, with distinct IP addresses defined in the loadBalancerIPs annotation of each instance.

    Note

    You cannot configure multiple RabbitMQ instances on the same virtual IP (VIP) address because all RabbitMQ instances use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses.

  • rabbitmq.override.service.metadata.annotations.metallb.universe.tf/loadBalancerIPs: The distinct IP address for a RabbitMQ instance that is exposed to an isolated network.
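The constraint in the note above can be illustrated with a small, hypothetical validator: given the loadBalancerIPs assigned to each RabbitMQ instance on a shared network, it flags any duplicates. The helper and its inputs are illustrative only and are not part of the OpenStack Operator:

```python
def check_rabbitmq_vips(instance_ips: dict[str, str]) -> None:
    """Raise if two RabbitMQ instances on the same network share a VIP.

    All RabbitMQ instances listen on the same port, so each instance
    exposed to a given network needs its own loadBalancerIPs address.
    """
    seen: dict[str, str] = {}  # IP -> instance that claimed it
    for instance, ip in instance_ips.items():
        if ip in seen:
            raise ValueError(
                f"{instance} and {seen[ip]} share VIP {ip}; "
                "assign distinct IP addresses")
        seen[ip] = instance

# Matches the example CR: rabbitmq on .85, rabbitmq-cell1 on .86
check_rabbitmq_vips({"rabbitmq": "172.17.0.85", "rabbitmq-cell1": "172.17.0.86"})
```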

4.4. Removing a service from the control plane

You can completely remove a service and the service database from the control plane after deployment by disabling the service. Many services are enabled by default, which means that the OpenStack Operator creates resources such as the service database and Identity service (keystone) users, even if no service pod is created because replicas is set to 0.

Warning

Remove a service with caution. Removing a service is not the same as stopping service pods, and it is irreversible: disabling a service removes the service database, and any resources that referenced the service are no longer tracked. Create a backup of the service database before you remove a service.

Procedure

  1. Open the OpenStackControlPlane CR file on your workstation.
  2. Locate the service you want to remove from the control plane and disable it:

      cinder:
        enabled: false
        apiOverride:
          route: {}
          ...
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP removes the resource related to the disabled service. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resource is updated with the disabled service when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  5. Optional: Confirm that the pods from the disabled service are no longer listed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack
  6. Check that the service is removed:

    $ oc get cinder -n openstack

    This command returns the following message when the service is successfully removed:

    No resources found in openstack namespace.
  7. Check that the API endpoints for the service are removed from the Identity service (keystone):

    $ oc rsh -n openstack openstackclient
    $ openstack endpoint list --service volumev3

    This command returns the following message when the API endpoints for the service are successfully removed:

    No service with a type, name or ID of 'volumev3' exists.

4.5. Additional resources

Chapter 5. Creating the data plane

The Red Hat OpenStack Services on OpenShift (RHOSO) data plane consists of RHEL 9.4 or 9.6 nodes. Use the OpenStackDataPlaneNodeSet custom resource definition (CRD) to create the custom resources (CRs) that define the nodes and the layout of the data plane. An OpenStackDataPlaneNodeSet CR is a logical grouping of nodes of a similar type. A data plane typically consists of multiple OpenStackDataPlaneNodeSet CRs to define groups of nodes with different configurations and roles.

You can use pre-provisioned or unprovisioned nodes in an OpenStackDataPlaneNodeSet CR:

  • Pre-provisioned node: You have used your own tooling to install the operating system on the node before adding it to the data plane.
  • Unprovisioned node: The node does not have an operating system installed before you add it to the data plane. The node is provisioned by using the Cluster Baremetal Operator (CBO) as part of the data plane creation and deployment process.
Note

You cannot include both pre-provisioned and unprovisioned nodes in the same OpenStackDataPlaneNodeSet CR.

To create and deploy a data plane, you must perform the following tasks:

  1. Create a Secret CR for each node set for Ansible to use to execute commands on the data plane nodes.
  2. Create the OpenStackDataPlaneNodeSet CRs that define the nodes and layout of the data plane.
  3. Create the OpenStackDataPlaneDeployment CR that triggers the Ansible execution that deploys and configures the software for the specified list of OpenStackDataPlaneNodeSet CRs.

The following procedures create two simple node sets, one with pre-provisioned nodes, and one with bare-metal nodes that must be provisioned during the node set deployment. The procedures aim to get you up and running quickly with a data plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add additional node sets to a deployed environment, and you can customize your deployed environment by updating the common configuration in the default ConfigMap CR for the service, and by creating custom services. For more information about how to customize your data plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.

Note

Red Hat OpenStack Services on OpenShift (RHOSO) supports external deployments of Red Hat Ceph Storage 7, 8, and 9. Configuration examples that reference Red Hat Ceph Storage use Release 7 information. If you are using a later version of Red Hat Ceph Storage, adjust the configuration examples accordingly.

5.1. Prerequisites

  • A functional control plane, created with the OpenStack Operator. For more information, see Creating the control plane.
  • You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.

5.2. Creating the data plane secrets

You must create the Secret custom resources (CRs) that the data plane requires to be able to operate. The Secret CRs are used by the data plane nodes to secure access between nodes, to register the node operating systems with the Red Hat Customer Portal, to enable node repositories, and to provide Compute nodes with access to libvirt.

To enable secure access between nodes, you must generate two SSH keys and create an SSH key Secret CR for each key:

  • An SSH key to enable Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key. You can create an SSH key for each OpenStackDataPlaneNodeSet CR in your data plane.
  • An SSH key to enable migration of instances between Compute nodes.

Prerequisites

  • Pre-provisioned nodes are configured with an SSH public key in the $HOME/.ssh/authorized_keys file for a user with passwordless sudo privileges. For more information, see Managing sudo access in the RHEL Configuring basic system settings guide.

Procedure

  1. For unprovisioned nodes, create the SSH key pair for Ansible:

    $ ssh-keygen -f <key_file_name> -N "" -t rsa -b 4096
    • Replace <key_file_name> with the name to use for the key pair.
  2. Create the Secret CR for Ansible and apply it to the cluster:

    $ oc create secret generic dataplane-ansible-ssh-private-key-secret \
    --save-config \
    --dry-run=client \
    --from-file=ssh-privatekey=<key_file_name> \
    --from-file=ssh-publickey=<key_file_name>.pub \
    [--from-file=authorized_keys=<key_file_name>.pub] -n openstack \
    -o yaml | oc apply -f -
    • Replace <key_file_name> with the name and location of your SSH key pair file.
    • Optional: Only include the --from-file=authorized_keys option for bare-metal nodes that must be provisioned when creating the data plane.
  3. If you are creating Compute nodes, create a secret for migration.

    1. Create the SSH key pair for instance migration:

      $ ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''
    2. Create the Secret CR for migration and apply it to the cluster:

      $ oc create secret generic nova-migration-ssh-key \
      --save-config \
      --from-file=ssh-privatekey=nova-migration-ssh-key \
      --from-file=ssh-publickey=nova-migration-ssh-key.pub \
      -n openstack \
      -o yaml | oc apply -f -
  4. For nodes that have not been registered to the Red Hat Customer Portal, create the Secret CR for subscription-manager credentials to register the nodes:

    $ oc create secret generic subscription-manager \
    --from-literal rhc_auth='{"login": {"username": "<subscription_manager_username>", "password": "<subscription_manager_password>"}}'
    • Replace <subscription_manager_username> with the username you set for subscription-manager.
    • Replace <subscription_manager_password> with the password you set for subscription-manager.
  5. Create a Secret CR that contains the Red Hat registry credentials:

    $ oc create secret generic redhat-registry --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'
    • Replace <username> and <password> with your Red Hat registry username and password credentials.

      For information about how to create your registry service account, see the Knowledge Base article Creating Registry Service Accounts.

  6. If you are creating Compute nodes, create a secret for libvirt.

    1. Create a file on your workstation named secret_libvirt.yaml to define the libvirt secret:

      apiVersion: v1
      kind: Secret
      metadata:
       name: libvirt-secret
       namespace: openstack
      type: Opaque
      data:
       LibvirtPassword: <base64_password>
      • Replace <base64_password> with a base64-encoded string with maximum length 63 characters. You can use the following command to generate a base64-encoded password:

        $ echo -n <password> | base64
        Tip

        If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.

    2. Create the Secret CR:

      $ oc apply -f secret_libvirt.yaml -n openstack
  7. Verify that the Secret CRs are created:

    $ oc describe secret dataplane-ansible-ssh-private-key-secret
    $ oc describe secret nova-migration-ssh-key
    $ oc describe secret subscription-manager
    $ oc describe secret redhat-registry
    $ oc describe secret libvirt-secret
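The libvirt step above base64-encodes a password whose plaintext must be at most 63 characters. A Python sketch of the same encoding and length check (the sample password is hypothetical):

```python
import base64

def encode_libvirt_password(password: str) -> str:
    """Base64-encode a password for the Secret data field and
    verify that the plaintext fits the 63-character limit."""
    if len(password) > 63:
        raise ValueError("libvirt password must be at most 63 characters")
    # Equivalent to: echo -n <password> | base64
    return base64.b64encode(password.encode("utf-8")).decode("ascii")

print(encode_libvirt_password("LibvirtPass123"))
```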

5.3. Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes

You can define an OpenStackDataPlaneNodeSet custom resource (CR) for each logical grouping of pre-provisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.

Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.

You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
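A minimal sketch of this inheritance, with hypothetical node and user names: both nodes inherit ansibleUser from nodeTemplate, and edpm-compute-1 overrides it.

```yaml
spec:
  nodeTemplate:
    ansible:
      ansibleUser: cloud-admin     # inherited by every node in the set
  nodes:
    edpm-compute-0: {}             # uses cloud-admin from the template
    edpm-compute-1:
      ansible:
        ansibleUser: rhel-admin    # node-specific value overrides the template
```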

Tip

For an example OpenStackDataPlaneNodeSet CR that defines a node set of pre-provisioned Compute nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes.

Prerequisites

  • The control plane network and IP address are configured for the pre-provisioned data plane nodes. For more information, see Creating the data plane network.

Procedure

  1. Create a file on your workstation named openstack_preprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-data-plane
      namespace: openstack
    spec:
      env:
        - name: ANSIBLE_FORCE_COLOR
          value: "True"
    • metadata.name: The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), and start and end with an alphanumeric character. Update the name in this example to a name that reflects the nodes in the set.

      Important

      The OpenStackDataPlaneNodeSet CR name you provide should be a maximum of 28 characters. Although 63 characters are allotted for the CR name, this character count includes system identifiers that are appended to the name after it is saved. Additional characters above the 63 character limit are truncated when the CR is saved and this causes errors in system processes.

    • spec.env: An optional list of environment variables to pass to the pod.
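The naming rules in step 1 can be checked mechanically. The following sketch validates a proposed name against the character rules and the recommended 28-character limit stated above; the exact regular expression is an assumption derived from those rules:

```python
import re

# Lowercase alphanumerics, '-' or '.', starting and ending alphanumeric.
NAME_PATTERN = re.compile(r"^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$")

def validate_nodeset_name(name: str) -> list[str]:
    """Return a list of problems with a proposed OpenStackDataPlaneNodeSet name."""
    problems = []
    if not NAME_PATTERN.match(name):
        problems.append("name must contain only lowercase alphanumerics, '-' or '.', "
                        "and start and end with an alphanumeric character")
    if len(name) > 28:
        problems.append("name longer than 28 characters may be truncated "
                        "after system identifiers are appended")
    return problems

print(validate_nodeset_name("openstack-data-plane"))  # prints []
```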
  2. Connect the data plane to the control plane network:

    spec:
      ...
      networkAttachments:
        - ctlplane
  3. Specify that the nodes in this set are pre-provisioned:

      preProvisioned: true
  4. Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

      nodeTemplate:
        ansibleSSHPrivateKeySecret: <secret-key>
    • Replace <secret-key> with the name of the SSH key Secret CR you created for this node set in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
  5. Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request log storage from a PersistentVolume (PV) that uses the NFS volume plugin, because ansible-runner writes logs to a FIFO file and NFS does not support FIFO files. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
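A minimal PVC sketch that satisfies this step; the claim name, storage class, and size are assumptions to adjust for your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ansible-logs-pvc
  namespace: openstack
spec:
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: your-RHOCP-storage-class  # must not use the NFS volume plugin
```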
  6. Enable persistent logging for the data plane nodes:

      nodeTemplate:
        ...
        extraMounts:
          - extraVolType: Logs
            volumes:
            - name: ansible-logs
              persistentVolumeClaim:
                claimName: <pvc_name>
            mounts:
            - name: ansible-logs
              mountPath: "/runner/artifacts"
    • Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
  7. Specify the management network:

      nodeTemplate:
        ...
        managementNetwork: ctlplane
  8. Specify the Secret CRs used to source the usernames and passwords to register the operating system of your nodes and to enable repositories. The following example demonstrates how to register your nodes to Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

      nodeTemplate:
        ...
        ansible:
          ansibleUser: cloud-admin
          ansiblePort: 22
          ansibleVarsFrom:
            - secretRef:
                name: subscription-manager
            - secretRef:
                name: redhat-registry
          ansibleVars:
            rhc_release: 9.4
            rhc_repositories:
                - {name: "*", state: disabled}
                - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
                - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
            edpm_bootstrap_release_version_package: []
  9. Add the network configuration template to apply to your data plane nodes. The following example applies a network configuration to the data plane nodes:

      nodeTemplate:
        ...
        ansible:
          ...
          ansibleVars:
            ...
            edpm_network_config_os_net_config_mappings:
              edpm-compute-0:
                nic1: 52:54:04:60:55:22
                nic2: 53:13:10:33:24:61
            neutron_physical_bridge_name: br-ex
            edpm_network_config_nmstate: true
            edpm_network_config_update: false
            edpm_network_config_template: |
              ---
              {% set mtu_list = [ctlplane_mtu] %}
              {% for network in nodeset_networks %}
              {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
              {%- endfor %}
              {% set min_viable_mtu = mtu_list | max %}
              network_config:
              - type: ovs_bridge
                name: {{ neutron_physical_bridge_name }}
                mtu: {{ min_viable_mtu }}
                use_dhcp: false
                dns_servers: {{ ctlplane_dns_nameservers }}
                domain: {{ dns_search_domains }}
                addresses:
                - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
                routes: {{ ctlplane_host_routes }}
                members:
                - type: linux_bond
                  name: bond0
                  mtu: {{ min_viable_mtu }}
                  bonding_options: "mode=802.3ad lacp_rate=fast updelay=1000 miimon=100 xmit_hash_policy=layer3+4"
                  members:
                  - type: interface
                    name: nic1
                    mtu: {{ min_viable_mtu }}
                    primary: true
                  - type: interface
                    name: nic2
                    mtu: {{ min_viable_mtu }}
              {% for network in nodeset_networks %}
                - type: vlan
                  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
                  vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
                  addresses:
                  - ip_netmask:
                      {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
                  routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
              {% endfor %}
    • nic1 and nic2: The MAC addresses assigned to the NIC to use for network configuration on the Compute node.
    • neutron_physical_bridge_name: The name of the OVS bridge to set up on the Compute node.
    • edpm_network_config_nmstate: Sets the os-net-config provider to nmstate. The default value is true. Change it to false only if a specific limitation of the nmstate provider requires you to use the ifcfg provider. In a future release, after the nmstate limitations are resolved, the ifcfg provider will be deprecated and removed. In this RHOSO release, adoption of a RHOSP 17.1 deployment with the nmstate provider is not supported. For this and other limitations of RHOSO nmstate support, see https://issues.redhat.com/browse/OSPRH-11309.
    • edpm_network_config_update: When deploying a node set for the first time, ensure that the edpm_network_config_update variable is set to false. If you later update a edpm_network_config_template, first set edpm_network_config_update to true. After you complete the update, reset it to false.

      Important

      After an edpm_network_config_template update, you must reset edpm_network_config_update to false. Otherwise, the nodes could lose network access. Whenever edpm_network_config_update is true, the updated network configuration is reapplied every time you create an OpenStackDataPlaneDeployment CR that includes the configure-network service in its servicesOverride list.

    • dns_servers: Autogenerated from IPAM and DNS; no user input is required.
    • domain: Autogenerated from IPAM and DNS; no user input is required.
    • routes: Autogenerated from IPAM and DNS; no user input is required.
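The min_viable_mtu logic in the template above collects the MTU of the ctlplane network plus one value per node set network and takes the maximum, so the bond and bridge can carry the largest frame any member network needs. A plain-Python sketch of the same computation, with hypothetical MTU values:

```python
# MTUs as the template sees them: ctlplane plus one value per nodeset network.
ctlplane_mtu = 1500
network_mtus = {"internalapi": 1500, "storage": 9000, "tenant": 1500}

# Mirrors: {% set mtu_list = [ctlplane_mtu] %} ... followed by
# {% set min_viable_mtu = mtu_list | max %} in the Jinja2 template.
mtu_list = [ctlplane_mtu] + list(network_mtus.values())
min_viable_mtu = max(mtu_list)

print(min_viable_mtu)  # prints 9000
```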
  10. Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties.
  11. Add the time synchronization configuration to align the node time with a central source. You can define multiple NTP servers in this configuration. Time synchronization is provided by the chrony service. The following example creates a time synchronization configuration for all nodes in the node set:

      nodeTemplate:
        ...
        ansible:
          ...
          ansibleVars:
            timesync_ntp_servers:
              - hostname: <ntp_server>
                iburst: <burst_value>
    • Replace <ntp_server> with the hostname or IP address of the NTP server.
    • Replace <burst_value> with true or false. Setting it to true enables fast initial synchronization with the NTP server. The default is false.

      Note

      You can also configure time synchronization at the node level by adding it to the ansibleVars section of the individual node.
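For example, a sketch of a node-level time synchronization override, set under an individual node rather than in nodeTemplate; the server hostname is a placeholder:

```yaml
nodes:
  edpm-compute-0:
    hostName: edpm-compute-0
    ansible:
      ansibleVars:
        # Node-specific NTP servers override any timesync_ntp_servers
        # value inherited from the nodeTemplate section.
        timesync_ntp_servers:
          - hostname: ntp-local.example.com
            iburst: true
```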

  12. Define each node in this node set:

      nodes:
        edpm-compute-0:
          hostName: edpm-compute-0
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.100
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.100
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.100
          ansible:
            ansibleHost: 192.168.122.100
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-compute-0.example.com
        edpm-compute-1:
          hostName: edpm-compute-1
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.101
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.101
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.101
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.101
          ansible:
            ansibleHost: 192.168.122.101
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-compute-1.example.com
    • edpm-compute-0: The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
    • networks: Defines the IPAM and the DNS records for the node.
    • fixedIP: Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR.
    • networkData: The Secret that contains network configuration that is node-specific.
    • userData.name: The Secret that contains user data that is node-specific.
    • ansibleHost: Overrides the hostname or IP address that Ansible uses to connect to the node. The default value is the value of the hostName field for the node, or the node definition reference, for example, edpm-compute-0. Set this field to the IP address that is configured on the pre-provisioned node.
    • ansibleVars: Node-specific Ansible variables that customize the node.

      Note
      • Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
      • You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
      • Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

        For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties.

  13. Save the openstack_preprovisioned_node_set.yaml definition file.
  14. Create the data plane resources:

    $ oc create --save-config -f openstack_preprovisioned_node_set.yaml -n openstack

Verification

  1. Verify that the data plane resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states.

  2. Verify that the Secret resource was created for the node set:

    $ oc get secret -n openstack | grep openstack-data-plane
    dataplanenodeset-openstack-data-plane Opaque 1 3m50s
  3. Verify that the services were created:

    $ oc get openstackdataplaneservice -n openstack
    NAME                AGE
    bootstrap           46m
    ceph-client         46m
    ceph-hci-pre        46m
    configure-network   46m
    configure-os        46m
    ...

The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Compute nodes with some node-specific configuration. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.

Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.
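As an illustrative check (a sketch, not part of the product tooling; the function name is hypothetical), you can validate a candidate name against these constraints in a shell:

```shell
# Validate an OpenStackDataPlaneNodeSet name: lowercase alphanumerics,
# '-' or '.', alphanumeric at both ends, and at most 53 characters.
valid_nodeset_name() {
  name="$1"
  [ "${#name}" -le 53 ] || return 1
  printf '%s' "$name" | grep -Eq '^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$'
}

valid_nodeset_name "openstack-data-plane" && echo "valid"    # prints "valid"
valid_nodeset_name "Openstack-Data-Plane" || echo "invalid"  # prints "invalid"
```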

Note

The following variables are autogenerated from IPAM and DNS and are not provided by the user:

  • ctlplane_dns_nameservers
  • dns_search_domains
  • ctlplane_host_routes
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane
  namespace: openstack
spec:
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
  networkAttachments:
    - ctlplane
  preProvisioned: true
  nodeTemplate:
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
    extraMounts:
      - extraVolType: Logs
        volumes:
        - name: ansible-logs
          persistentVolumeClaim:
            claimName: <pvc_name>
        mounts:
        - name: ansible-logs
          mountPath: "/runner/artifacts"
    managementNetwork: ctlplane
    ansible:
      ansibleUser: cloud-admin
      ansiblePort: 22
      ansibleVarsFrom:
        - secretRef:
            name: subscription-manager
        - secretRef:
            name: redhat-registry
      ansibleVars:
        timesync_ntp_servers:
          - hostname: ntp.example.com
            iburst: true
          - hostname: ntp2.example.com
            iburst: false
        rhc_release: 9.4
        rhc_repositories:
            - {name: "*", state: disabled}
            - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
            - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
            - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
            - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
            - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
            - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
        edpm_bootstrap_release_version_package: []
        edpm_network_config_os_net_config_mappings:
          edpm-compute-0:
            nic1: 52:54:04:60:55:22
            nic2: 53:13:10:33:24:61
        neutron_physical_bridge_name: br-ex
        edpm_network_config_nmstate: true
        edpm_network_config_update: false
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: linux_bond
              name: bond0
              mtu: {{ min_viable_mtu }}
              bonding_options: "mode=802.3ad lacp_rate=fast updelay=1000 miimon=100 xmit_hash_policy=layer3+4"
              members:
              - type: interface
                name: nic1
                mtu: {{ min_viable_mtu }}
                primary: true
              - type: interface
                name: nic2
                mtu: {{ min_viable_mtu }}
          {% for network in nodeset_networks %}
            - type: vlan
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              addresses:
              - ip_netmask:
                  {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
          {% endfor %}
  nodes:
    edpm-compute-0:
      hostName: edpm-compute-0
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.100
      - name: internalapi
        subnetName: subnet1
        fixedIP: 172.17.0.100
      - name: storage
        subnetName: subnet1
        fixedIP: 172.18.0.100
      - name: tenant
        subnetName: subnet1
        fixedIP: 172.19.0.100
      ansible:
        ansibleHost: 192.168.122.100
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-0.example.com
    edpm-compute-1:
      hostName: edpm-compute-1
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.101
      - name: internalapi
        subnetName: subnet1
        fixedIP: 172.17.0.101
      - name: storage
        subnetName: subnet1
        fixedIP: 172.18.0.101
      - name: tenant
        subnetName: subnet1
        fixedIP: 172.19.0.101
      ansible:
        ansibleHost: 192.168.122.101
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-1.example.com

To create a data plane with unprovisioned nodes, you must perform the following tasks:

  1. Create a BareMetalHost custom resource (CR) for each bare-metal data plane node.
  2. Define an OpenStackDataPlaneNodeSet CR for each logical grouping of unprovisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking.

5.4.1. Prerequisites

You must create a BareMetalHost custom resource (CR) for each bare-metal data plane node. At a minimum, you must provide the data required to add the bare-metal data plane node on the network so that the remaining installation steps can access the node and perform the configuration.

Note

If you use the ctlplane interface for provisioning and you have rp_filter configured on the kernel to enable Reverse Path Forwarding (RPF), then the reverse path filtering logic drops traffic. For information about how to prevent traffic being dropped because of the RPF filter, see How to prevent asymmetric routing.

Procedure

  1. The Bare Metal Operator (BMO) manages BareMetalHost custom resources (CRs) in the openshift-machine-api namespace by default. Update the Provisioning CR to watch all namespaces:

    $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'
  2. If you are using virtual media boot for bare-metal data plane nodes and the nodes are not connected to a provisioning network, you must update the Provisioning CR to enable virtualMediaViaExternalNetwork, which enables bare-metal connectivity through the external network:

    $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"virtualMediaViaExternalNetwork": true }}'
  3. Create a file on your workstation that defines the Secret CR with the credentials for accessing the Baseboard Management Controller (BMC) of each bare-metal data plane node in the node set:

    apiVersion: v1
    kind: Secret
    metadata:
      name: edpm-compute-0-bmc-secret
      namespace: openstack
    type: Opaque
    data:
      username: <base64_username>
      password: <base64_password>
    • Replace <base64_username> and <base64_password> with strings that are base64-encoded. You can use the following command to generate a base64-encoded string:

      $ echo -n <string> | base64
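For example, with hypothetical credentials (printf is used here instead of echo -n for portability; substitute your own BMC username and password):

```shell
# Encode hypothetical BMC credentials for the data fields of the Secret CR.
username_b64=$(printf '%s' 'admin' | base64)
password_b64=$(printf '%s' 'p@ssw0rd' | base64)
echo "$username_b64"    # prints "YWRtaW4="
echo "$password_b64"    # prints "cEBzc3cwcmQ="

# Decode to verify the round-trip before pasting the values into the CR.
printf '%s' "$username_b64" | base64 --decode    # prints "admin"
```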
      Tip

      If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.
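For example, a sketch of the same Secret using the stringData field with plain-text placeholder values:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: edpm-compute-0-bmc-secret
  namespace: openstack
type: Opaque
stringData:
  # Plain-text values; the cluster encodes them when the Secret is stored.
  username: <username>
  password: <password>
```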

  4. Create a file named bmh_nodes.yaml on your workstation that defines the BareMetalHost CR for each bare-metal data plane node. The following example creates a BareMetalHost CR with the provisioning method Redfish virtual media:

    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: edpm-compute-0
      namespace: openstack
      labels:
        app: openstack
        workload: compute
    spec:
      bmc:
        address: redfish-virtualmedia+http://192.168.111.1:8000/redfish/v1/Systems/e8efd888-f844-4fe0-9e2e-498f4ab7806d
        credentialsName: edpm-compute-0-bmc-secret
      bootMACAddress: 00:c7:e4:a7:e7:f3
      bootMode: UEFI
      online: false
      [preprovisioningNetworkDataName: <network_config_secret_name>]
    • metadata.labels: Key-value pairs, such as app, workload, and nodeName, that provide varying levels of granularity for labelling nodes. You can use these labels when you create an OpenStackDataPlaneNodeSet CR to describe the configuration of bare-metal nodes to be provisioned or to define nodes in a node set.
    • spec.bmc.address: The URL for communicating with the BMC controller of the node. For information about BMC addressing for other provisioning methods, see BMC addressing in the RHOCP Installing on bare metal guide.
    • spec.bmc.credentialsName: The name of the Secret CR you created in the previous step for accessing the BMC of the node.
    • preprovisioningNetworkDataName: An optional field that specifies the name of the network configuration secret in the local namespace to pass to the pre-provisioning image. The network configuration must be in nmstate format.

    For more information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the RHOCP Installing on bare metal guide.

  5. Create the BareMetalHost resources:

    $ oc create -f bmh_nodes.yaml
  6. Verify that the BareMetalHost resources have been created and are in the Available state:

    $ oc wait --for=jsonpath='{.status.provisioning.state}'=available bmh/edpm-compute-0 --timeout=<timeout_value>
    • Replace <timeout_value> with the length of time that you want the command to wait for the task to complete, for example, 60m to wait 60 minutes. Use a value that is appropriate to the size of your deployment; give large deployments more time to complete deployment tasks.

You can define an OpenStackDataPlaneNodeSet custom resource (CR) for each logical grouping of unprovisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.

Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.

You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.

Note

To set a root password for the data plane nodes during provisioning, use the passwordSecret field in the OpenStackDataPlaneNodeSet CR. For more information, see How to set a root password for the Dataplane Node on Red Hat OpenStack Services on OpenShift.

Tip

For an example OpenStackDataPlaneNodeSet CR that creates a node set from unprovisioned Compute nodes, see Example OpenStackDataPlaneNodeSet CR for unprovisioned nodes.

Prerequisites

Procedure

  1. Create a file on your workstation named openstack_unprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-data-plane
      namespace: openstack
    spec:
      tlsEnabled: true
      env:
        - name: ANSIBLE_FORCE_COLOR
          value: "True"
    • metadata.name: The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), and start and end with an alphanumeric character. Update the name in this example to a name that reflects the nodes in the set.

      Important

      The OpenStackDataPlaneNodeSet CR name you provide should be a maximum of 28 characters. Although 63 characters are allotted for the CR name, this character count includes system identifiers that are appended to the name after it is saved. Additional characters above the 63 character limit are truncated when the CR is saved and this causes errors in system processes.

    • spec.env: An optional list of environment variables to pass to the pod.
  2. Connect the data plane to the control plane network:

    spec:
      ...
      networkAttachments:
        - ctlplane
  3. Specify that the nodes in this set are unprovisioned and must be provisioned when creating the resource:

      preProvisioned: false
  4. Define the baremetalSetTemplate field to describe the configuration of the bare-metal nodes that must be provisioned when creating the resource:

      baremetalSetTemplate:
        deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
        bmhNamespace: <bmh_namespace>
        cloudUserName: <ansible_ssh_user>
        bmhLabelSelector:
          app: <bmh_label>
        ctlplaneInterface: <interface>
    • Replace <bmh_namespace> with the namespace defined in the corresponding BareMetalHost CR for the node, for example, openstack.
    • Replace <ansible_ssh_user> with the username of the Ansible SSH user, for example, cloud-admin.
    • Replace <bmh_label> with the metadata label defined in the corresponding BareMetalHost CR for the node, for example, openstack. Metadata labels, such as app, workload, and nodeName are key-value pairs for labelling nodes. Set the bmhLabelSelector field to select data plane nodes based on one or more labels that match the labels in the corresponding BareMetalHost CR.
    • Replace <interface> with the control plane interface the node connects to, for example, enp6s0.
  5. If you created a custom OpenStackProvisionServer CR, add it to your baremetalSetTemplate definition:

      baremetalSetTemplate:
        ...
        provisionServerName: my-os-provision-server
  6. Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

      nodeTemplate:
        ansibleSSHPrivateKeySecret: <secret-key>
    • Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
  7. Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO files, and ansible-runner creates a FIFO file to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
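A minimal PVC sketch that satisfies these requirements; the name, storage size, and storageClassName are placeholders for your environment:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ansible-logs-pvc
  namespace: openstack
spec:
  accessModes:
    - ReadWriteOnce        # required access mode
  volumeMode: Filesystem   # required volume mode
  resources:
    requests:
      storage: 5Gi         # placeholder size
  storageClassName: <storage_class>  # placeholder; must not use the NFS volume plugin
```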
  8. Enable persistent logging for the data plane nodes:

      nodeTemplate:
        ...
        extraMounts:
          - extraVolType: Logs
            volumes:
            - name: ansible-logs
              persistentVolumeClaim:
                claimName: <pvc_name>
            mounts:
            - name: ansible-logs
              mountPath: "/runner/artifacts"
    • Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
  9. Specify the management network:

      nodeTemplate:
        ...
        managementNetwork: ctlplane
  10. Specify the Secret CRs used to source the usernames and passwords to register the operating system of your nodes and to enable repositories. The following example demonstrates how to register your nodes to Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

      nodeTemplate:
        ansible:
          ansibleUser: cloud-admin
          ansiblePort: 22
          ansibleVarsFrom:
            - secretRef:
                name: subscription-manager
            - secretRef:
                name: redhat-registry
          ansibleVars:
            rhc_release: 9.4
            rhc_repositories:
                - {name: "*", state: disabled}
                - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
                - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
            edpm_bootstrap_release_version_package: []
  11. Add the network configuration template to apply to your data plane nodes. The following example applies a network configuration to the data plane nodes:

      nodeTemplate:
        ...
        ansible:
          ...
          ansibleVars:
            ...
            edpm_network_config_os_net_config_mappings:
              edpm-compute-0:
                nic1: 52:54:04:60:55:22
                nic2: 53:13:10:33:24:61
            neutron_physical_bridge_name: br-ex
            edpm_network_config_nmstate: true
            edpm_network_config_update: false
            edpm_network_config_template: |
              ---
              {% set mtu_list = [ctlplane_mtu] %}
              {% for network in nodeset_networks %}
              {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
              {%- endfor %}
              {% set min_viable_mtu = mtu_list | max %}
              network_config:
              - type: ovs_bridge
                name: {{ neutron_physical_bridge_name }}
                mtu: {{ min_viable_mtu }}
                use_dhcp: false
                dns_servers: {{ ctlplane_dns_nameservers }}
                domain: {{ dns_search_domains }}
                addresses:
                - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
                routes: {{ ctlplane_host_routes }}
                members:
                - type: linux_bond
                  name: bond0
                  mtu: {{ min_viable_mtu }}
                  bonding_options: "mode=802.3ad lacp_rate=fast updelay=1000 miimon=100 xmit_hash_policy=layer3+4"
                  members:
                  - type: interface
                    name: nic1
                    mtu: {{ min_viable_mtu }}
                    primary: true
                  - type: interface
                    name: nic2
                    mtu: {{ min_viable_mtu }}
              {% for network in nodeset_networks %}
                - type: vlan
                  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
                  vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
                  addresses:
                  - ip_netmask:
                      {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
                  routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
              {% endfor %}
    • nic1 and nic2: The MAC addresses assigned to the NICs to use for network configuration on the Compute node.
    • neutron_physical_bridge_name: The name of the OVS bridge to set up on the Compute node.
    • edpm_network_config_nmstate: Sets the os-net-config provider to nmstate. The default value is true. Change it to false only if a specific limitation of the nmstate provider requires you to use the ifcfg provider. In a future release, after the nmstate limitations are resolved, the ifcfg provider will be deprecated and removed. In this RHOSO release, adoption of a RHOSP 17.1 deployment with the nmstate provider is not supported. For this and other limitations of RHOSO nmstate support, see https://issues.redhat.com/browse/OSPRH-11309.
    • edpm_network_config_update: When deploying a node set for the first time, ensure that the edpm_network_config_update variable is set to false. If you later update an edpm_network_config_template, first set edpm_network_config_update to true. After you complete the update, reset it to false.

      Important

      After an edpm_network_config_template update, you must reset edpm_network_config_update to false; otherwise, the nodes could lose network access. While edpm_network_config_update is true, the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service in its servicesOverride list.

    • dns_servers: Autogenerated from IPAM and DNS; no user input is required.
    • domain: Autogenerated from IPAM and DNS; no user input is required.
    • routes: Autogenerated from IPAM and DNS; no user input is required.
  12. Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR properties.
  13. Add the time synchronization configuration to align the node time with a central source. You can define multiple NTP servers in this configuration. Time synchronization is provided by the chrony service. The following example creates a time synchronization configuration for all nodes in the node set:

      nodeTemplate:
        ...
        ansible:
          ...
          ansibleVars:
            timesync_ntp_servers:
              - hostname: <ntp_server>
                iburst: <burst_value>
    • Replace <ntp_server> with the hostname or IP address of the NTP server.
    • Replace <burst_value> with true or false. Setting it to true enables fast initial synchronization with the NTP server. The default is false.

      Note

      You can also configure time synchronization at the node level by adding it to the ansibleVars section of the individual node.

  14. Define each node in this node set:

      nodes:
        edpm-compute-0:
          hostName: edpm-compute-0
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100
          - name: internalapi
            subnetName: subnet1
          - name: storage
            subnetName: subnet1
          - name: tenant
            subnetName: subnet1
          networkData:
            name: edpm-compute-0-network-data
            namespace: openstack
          userData:
            name: edpm-compute-0-user-data
            namespace: openstack
          ansible:
            ansibleHost: 192.168.122.100
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-compute-0.example.com
          bmhLabelSelector:
            nodeName: edpm-compute-0
        edpm-compute-1:
          hostName: edpm-compute-1
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.101
          - name: internalapi
            subnetName: subnet1
          - name: storage
            subnetName: subnet1
          - name: tenant
            subnetName: subnet1
          networkData:
            name: edpm-compute-1-network-data
            namespace: openstack
          userData:
            name: edpm-compute-1-user-data
            namespace: openstack
          ansible:
            ansibleHost: 192.168.122.101
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-compute-1.example.com
          bmhLabelSelector:
            nodeName: edpm-compute-1
    • edpm-compute-0: The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
    • networks: Defines the IPAM and the DNS records for the node.
    • fixedIP: Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR.
    • networkData: The Secret that contains network configuration that is node-specific.
    • userData.name: The Secret that contains user data that is node-specific.
    • ansibleHost: Overrides the hostname or IP address that Ansible uses to connect to the node. The default value is the value set for the hostName field for the node or the node definition reference, for example, edpm-compute-0.
    • ansibleVars: Node-specific Ansible variables that customize the node.
    • bmhLabelSelector: Metadata labels, such as app, workload, and nodeName are key-value pairs for labelling nodes. Set the bmhLabelSelector field to select data plane nodes based on one or more labels that match the labels in the corresponding BareMetalHost CR.

      Note
      • Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
      • You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
      • Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

        For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties.
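For the bmhLabelSelector shown in the node definitions above to match, the corresponding BareMetalHost CR must carry the same label. A sketch, reusing the labels from the earlier BareMetalHost example plus a nodeName label for node-level matching:

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: edpm-compute-0
  namespace: openstack
  labels:
    app: openstack
    workload: compute
    nodeName: edpm-compute-0  # matched by the node's bmhLabelSelector
```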

  15. Save the openstack_unprovisioned_node_set.yaml definition file.
  16. Create the data plane resources:

    $ oc create --save-config -f openstack_unprovisioned_node_set.yaml -n openstack

Verification

  1. Verify that the data plane resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states.

  2. Verify that the Secret resource was created for the node set:

    $ oc get secret -n openstack | grep openstack-data-plane
    dataplanenodeset-openstack-data-plane Opaque 1 3m50s
  3. Verify that the nodes have transitioned to the provisioned state:

    $ oc get bmh
    NAME            STATE         CONSUMER               ONLINE   ERROR   AGE
    edpm-compute-0  provisioned   openstack-data-plane   true             3d21h
    Note

    A provisioned node displays provisioned in the STATE column and true in the ONLINE column. If a node displays provisioned in the STATE column but false in the ONLINE column, the node is provisioned but has experienced an error condition, such as a loss of network connectivity, the use of incorrect credentials, or an inaccessible Baseboard Management Controller (BMC). Consult the Bare Metal Provisioning service (ironic) logs to determine the cause of the error condition. If the error condition persists, contact Red Hat Support.

  4. Verify that the services were created:

    $ oc get openstackdataplaneservice -n openstack
    NAME                    AGE
    bootstrap               8m40s
    ceph-client             8m40s
    ceph-hci-pre            8m40s
    configure-network       8m40s
    configure-os            8m40s
    ...
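The provisioned-but-offline condition described in the note in step 3 can also be spotted mechanically by filtering the oc get bmh table output. In the following sketch, a sample of that output stands in for the live command, so the filter itself is illustrative rather than part of the procedure:

```shell
# Sketch: flag BareMetalHost entries that are provisioned but not online.
# On a live cluster you would pipe `oc get bmh -n openstack` into the awk
# filter; here a sample of that table output stands in for the command.
sample_output='NAME            STATE         CONSUMER               ONLINE   ERROR   AGE
edpm-compute-0  provisioned   openstack-data-plane   true             3d21h
edpm-compute-1  provisioned   openstack-data-plane   false            3d21h'

# Columns: NAME=$1, STATE=$2, CONSUMER=$3, ONLINE=$4 (ERROR is empty here).
printf '%s\n' "$sample_output" \
  | awk 'NR > 1 && $2 == "provisioned" && $4 == "false" { print $1 }'
```

Any node name this prints is provisioned but offline, and warrants the connectivity, credential, and BMC checks described in the note above.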

The following example OpenStackDataPlaneNodeSet CR creates a node set from unprovisioned Compute nodes with some node-specific configuration. The unprovisioned Compute nodes are provisioned when the node set is created. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.

Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.
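The naming rules above can be expressed as a single regular expression. The following sketch is illustrative only; the helper name is not part of any Red Hat tooling:

```shell
# Sketch: validate an OpenStackDataPlaneNodeSet name: lower case
# alphanumerics, '-' or '.', starts and ends alphanumeric, at most 53 chars.
is_valid_nodeset_name() {
  name=$1
  [ "${#name}" -le 53 ] \
    && printf '%s' "$name" | grep -Eq '^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$'
}

is_valid_nodeset_name "openstack-data-plane" && echo "valid"
is_valid_nodeset_name "Openstack_Data" || echo "invalid"
```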

Note

The following variables are autogenerated from IPAM and DNS and are not provided by the user:

  • ctlplane_dns_nameservers
  • dns_search_domains
  • ctlplane_host_routes
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane
  namespace: openstack
spec:
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
  networkAttachments:
    - ctlplane
  preProvisioned: false
  baremetalSetTemplate:
    deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
    bmhNamespace: openstack
    cloudUserName: cloud-admin
    bmhLabelSelector:
      app: openstack
    ctlplaneInterface: enp1s0
  nodeTemplate:
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
    extraMounts:
      - extraVolType: Logs
        volumes:
        - name: ansible-logs
          persistentVolumeClaim:
            claimName: <pvc_name>
        mounts:
        - name: ansible-logs
          mountPath: "/runner/artifacts"
    managementNetwork: ctlplane
    ansible:
      ansibleUser: cloud-admin
      ansiblePort: 22
      ansibleVarsFrom:
        - secretRef:
            name: subscription-manager
        - secretRef:
            name: redhat-registry
      ansibleVars:
        timesync_ntp_servers:
          - hostname: ntp.example.com
            iburst: true
          - hostname: ntp2.example.com
            iburst: false
        rhc_release: 9.4
        rhc_repositories:
            - {name: "*", state: disabled}
            - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
            - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
            - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
            - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
            - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
            - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
        edpm_bootstrap_release_version_package: []
        edpm_network_config_os_net_config_mappings:
          edpm-compute-0:
            nic1: 52:54:04:60:55:22
          edpm-compute-1:
            nic1: 52:54:04:60:55:23
        neutron_physical_bridge_name: br-ex
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: interface
              name: nic1
              mtu: {{ min_viable_mtu }}
              # force the MAC address of the bridge to this interface
              primary: true
          {% for network in nodeset_networks %}
            - type: vlan
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              addresses:
              - ip_netmask:
                  {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
          {% endfor %}
  nodes:
    edpm-compute-0:
      hostName: edpm-compute-0
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.100
      - name: internalapi
        subnetName: subnet1
      - name: storage
        subnetName: subnet1
      - name: tenant
        subnetName: subnet1
      ansible:
        ansibleHost: 192.168.122.100
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-0.example.com
      bmhLabelSelector:
        nodeName: edpm-compute-0
    edpm-compute-1:
      hostName: edpm-compute-1
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.101
      - name: internalapi
        subnetName: subnet1
      - name: storage
        subnetName: subnet1
      - name: tenant
        subnetName: subnet1
      ansible:
        ansibleHost: 192.168.122.101
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-1.example.com
      bmhLabelSelector:
        nodeName: edpm-compute-1

Configure network interface bonding, also known as NIC teaming, on the control plane network to provide redundancy and increased throughput for data plane nodes.

Prerequisites

  • Ensure your network environment is configured to support network interface bonding before implementing this configuration. Network environment configuration, such as the implementation and use of Link Aggregation Control Protocol (LACP), should be performed before implementing network interface bonding in your data plane nodes. For more information about the appropriate network environment configuration, see the vendor documentation for the switches and routers used in your network.

Procedure

  1. Open the OpenStackDataPlaneNodeSet CR definition file for the node set you want to update, for example, openstack_data_plane.yaml.
  2. Locate the baremetalSetTemplate section of the definition file.
  3. Configure network interface bonding by using the ctlplaneInterface and ctlplaneBond attributes. The following is an example of this configuration:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-edpm
    spec:
      baremetalSetTemplate:
        ctlplaneInterface: bond0
        ctlplaneBond:
          bondInterfaces:
            - eno1
            - eno2
          bondMode: "802.3ad"
          bondOptions:
            bond-miimon: "100"
            bond-xmit-hash-policy: "layer3+4"

    where:

    ctlplaneInterface
    The control plane network interface that the data plane node set uses. Set this to the bond interface name.
    ctlplaneBond
    The options that make up the network interface bonding configuration.
    bondInterfaces
    The list of physical interfaces to bond. A minimum of two interfaces is required.
    bondMode

    The mode used to handle traffic across multiple interfaces in the bond. The following modes are supported:

    • active-backup: Only one interface in the bond carries traffic at a time. If that interface fails, another interface takes over. This is the default mode.
    • 802.3ad: Traffic is routed using IEEE 802.3ad Dynamic Link Aggregation. The physical switch must be configured for an LACP port channel.
    • balance-rr: Traffic is load balanced using a round-robin policy.
    • balance-xor: Traffic is balanced using an XOR policy.
    bondOptions
    Additional bonding options configured as key-value pairs.
  4. Save the OpenStackDataPlaneNodeSet CR definition file.
  5. Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f openstack_data_plane.yaml
  6. Verify that the data plane resource has been updated by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  7. Verify that the nodes have transitioned to the provisioned state:

    $ oc get bmh -n openstack
    NAME            STATE         CONSUMER               ONLINE   ERROR   AGE
    edpm-compute-0  provisioned   openstack-data-plane   true             3d21h
  8. Verify node connectivity by viewing the Ansible logs:

    $ oc logs -l app=openstackansibleee --tail=-1 -f

    If a node encounters SSH or networking issues, the Ansible log displays unreachable or connection refused for the node network address.
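If your switch ports are not configured for an LACP port channel, the same baremetalSetTemplate fields can describe an active-backup bond instead. The following fragment is a sketch; the interface names are illustrative:

```yaml
baremetalSetTemplate:
  ctlplaneInterface: bond0
  ctlplaneBond:
    bondInterfaces:
      - eno1
      - eno2
    # active-backup is the default mode and requires no switch-side LACP
    # configuration; only one interface carries traffic at a time.
    bondMode: "active-backup"
    bondOptions:
      bond-miimon: "100"
```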

5.4.6. How to prevent asymmetric routing

If the Red Hat OpenShift Container Platform (RHOCP) cluster nodes have an interface with an IP address in the same IP subnet that the nodes use for provisioning, traffic between the cluster and the data plane nodes is asymmetric. Consequently, if you use the ctlplane interface for provisioning and the kernel has rp_filter configured to enable Reverse Path Forwarding (RPF), the reverse path filtering logic drops this traffic. Implement one of the following methods to prevent traffic from being dropped by the RPF filter:

  • Use a dedicated NIC on the network where RHOCP binds the provisioning service, that is, the RHOCP machine network or a dedicated RHOCP provisioning network.
  • Use a dedicated NIC on a network that is reachable through routing on the RHOCP master nodes. For information about how to add routes to your RHOCP networks, see Adding routes to the RHOCP networks in Customizing the Red Hat OpenStack Services on OpenShift deployment.
  • Use a shared NIC for provisioning and the RHOSO ctlplane interface. You can use one of the following methods to configure a shared NIC:

    • Configure your network to support two IP ranges by configuring two IP addresses on the router interface: one in the address range you use for the ctlplane network and the other in the address range that you use for provisioning.
    • Configure a DHCP server to allocate an address range for provisioning that is different from the ctlplane address range.
    • If a DHCP server is not available, configure the preprovisioningNetworkData field on the BareMetalHost CRs. For information about how to configure the preprovisioningNetworkData field, see Configuring preprovisioningNetworkData on the BareMetalHost CRs.
  • If your environment has RHOCP master and worker nodes that are not connected to the network used by the EDPM nodes, you can set the nodeSelector field on the OpenStackProvisionServer CR to place it on a worker node that does not have an interface with an IP address in the same IP subnet as that used by the nodes when provisioning.
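Whether the RPF filter drops traffic depends on the rp_filter sysctl on the affected hosts. The following sketch reads the standard kernel sysctl paths to report the current mode; it is a diagnostic aid, not part of the deployment procedure:

```shell
# Sketch: report the reverse-path-filter mode for the default policy and
# for all interfaces. Values: 0 = disabled, 1 = strict RPF, 2 = loose RPF.
for f in /proc/sys/net/ipv4/conf/all/rp_filter \
         /proc/sys/net/ipv4/conf/default/rp_filter; do
  if [ -r "$f" ]; then
    printf '%s = %s\n' "$f" "$(cat "$f")"
  fi
done
```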

5.4.7. Configuring preprovisioningNetworkData on the BareMetalHost CRs

If you use the ctlplane interface for provisioning and you have rp_filter configured on the kernel to enable Reverse Path Forwarding (RPF), then the reverse path filtering logic drops traffic. To prevent traffic from being dropped because of the RPF filter, you can configure the preprovisioningNetworkData field on the BareMetalHost CRs.

Procedure

  1. Create a Secret CR with preprovisioningNetworkData in nmstate format for each BareMetalHost CR:

    apiVersion: v1
    kind: Secret
    metadata:
      name: leaf0-0-preprovision-network-data
      namespace: openstack
    stringData:
      nmstate: |
        interfaces:
          - name: enp5s0
            type: ethernet
            state: up
            ipv4:
              enabled: true
              address:
                - ip: 192.168.130.100
                  prefix-length: 24
        dns-resolver:
          config:
            server:
              - 192.168.122.1
        routes:
          config:
            - destination: 0.0.0.0/0
              next-hop-address: 192.168.130.1
              next-hop-interface: enp5s0
    type: Opaque
  2. Create the Secret resources:

    $ oc create -f secret_leaf0-0.yaml
  3. Open the BareMetalHost CR file, for example, bmh_nodes.yaml.
  4. Add the preprovisioningNetworkDataName field to each BareMetalHost CR defined for each node in the bmh_nodes.yaml file:

    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      annotations:
        inspect.metal3.io: disabled
      labels:
        app: openstack
        nodeset: leaf0
      name: leaf0-0
      namespace: openstack
    spec:
      architecture: x86_64
      automatedCleaningMode: metadata
      bmc:
        address: redfish-virtualmedia+http://sushy.utility:8000/redfish/v1/Systems/df2bf92f-3e2c-47e1-b1fa-0d2e06bd1b1d
        credentialsName: bmc-secret
      bootMACAddress: 52:54:04:15:a8:d9
      bootMode: UEFI
      online: false
      preprovisioningNetworkDataName: leaf0-0-preprovision-network-data
      rootDeviceHints:
        deviceName: /dev/sda
  5. Update the BareMetalHost CRs:

    $ oc apply -f bmh_nodes.yaml

5.5. OpenStackDataPlaneNodeSet CR spec properties

The following sections detail the OpenStackDataPlaneNodeSet CR spec properties you can configure.

5.5.1. nodeTemplate

Defines the common attributes for the nodes in this OpenStackDataPlaneNodeSet. You can override these common attributes in the definition for each individual node.

Table 5.1. nodeTemplate properties

ansibleSSHPrivateKeySecret

Name of the private SSH key secret that contains the private SSH key for connecting to nodes.

Secret name format: Secret.data.ssh-privatekey

For more information, see Creating an SSH authentication secret.

Default: dataplane-ansible-ssh-private-key-secret

managementNetwork

Name of the network to use for management (SSH/Ansible). Default: ctlplane

networks

Network definitions for the OpenStackDataPlaneNodeSet.

ansible

Ansible configuration options. For more information, see ansible properties.

extraMounts

The files to mount into an Ansible Execution Pod.

userData

UserData configuration for the OpenStackDataPlaneNodeSet.

networkData

NetworkData configuration for the OpenStackDataPlaneNodeSet.

5.5.2. nodes

Defines the node names and node-specific attributes for the nodes in this OpenStackDataPlaneNodeSet. Overrides the common attributes defined in the nodeTemplate.

Table 5.2. nodes properties

ansible

Ansible configuration options. For more information, see ansible properties.

extraMounts

The files to mount into an Ansible Execution Pod.

hostName

The node name.

managementNetwork

Name of the network to use for management (SSH/Ansible).

networkData

NetworkData configuration for the node.

networks

Instance networks.

userData

Node-specific user data.

5.5.3. ansible

Defines the group of Ansible configuration options.

Table 5.3. ansible properties

ansibleUser

The user associated with the secret you created in Creating the data plane secrets. Default: rhel-user

ansibleHost

SSH host for the Ansible connection.

ansiblePort

SSH port for the Ansible connection.

ansibleVars

The Ansible variables that customize the set of nodes. You can use this property to configure any custom Ansible variable, including the Ansible variables available for each edpm-ansible role. For a complete list of Ansible variables by role, see the edpm-ansible documentation.

Note

The ansibleVars parameters that you can configure for an OpenStackDataPlaneNodeSet CR are determined by the services defined for the OpenStackDataPlaneNodeSet. The OpenStackDataPlaneService CRs call the Ansible playbooks from the edpm-ansible playbook collection, which include the roles that are executed as part of the data plane service.

ansibleVarsFrom

A list of sources to populate Ansible variables from. Values defined in ansibleVars with a duplicate key take precedence. For more information, see ansibleVarsFrom properties.

5.5.4. ansibleVarsFrom

Defines the list of sources to populate Ansible variables from.

Table 5.4. ansibleVarsFrom properties

prefix

An optional identifier to prepend to each key from the referenced ConfigMap or Secret. Must be a C_IDENTIFIER.

configMapRef

The ConfigMap CR to select the ansibleVars from.

secretRef

The Secret CR to select the ansibleVars from.
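A C_IDENTIFIER consists of ASCII letters, digits, and underscores and must not start with a digit. The following sketch of that check is illustrative; the helper name is not part of any Red Hat tooling:

```shell
# Sketch: check whether a prefix is a valid C_IDENTIFIER.
is_c_identifier() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z_][A-Za-z0-9_]*$'
}

is_c_identifier "edpm_" && echo "valid prefix"
is_c_identifier "9vars" || echo "invalid prefix"
```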

5.6. Deploying the data plane

You use the OpenStackDataPlaneDeployment custom resource definition (CRD) to configure the services on the data plane nodes and deploy the data plane. You control the execution of Ansible on the data plane by creating OpenStackDataPlaneDeployment custom resources (CRs). Each OpenStackDataPlaneDeployment CR models a single Ansible execution. Create an OpenStackDataPlaneDeployment CR to deploy each of your OpenStackDataPlaneNodeSet CRs.

Note

When the OpenStackDataPlaneDeployment successfully completes execution, it does not automatically execute the Ansible again, even if the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources are changed. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR. Remove any failed OpenStackDataPlaneDeployment CRs in your environment before creating a new one to allow the new OpenStackDataPlaneDeployment to run Ansible with an updated Secret.

Procedure

  1. Create a file on your workstation named openstack_data_plane_deploy.yaml to define the OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: data-plane-deploy
      namespace: openstack
    • metadata.name: The OpenStackDataPlaneDeployment CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the node sets in the deployment.
  2. Add all the OpenStackDataPlaneNodeSet CRs that you want to deploy:

    spec:
      nodeSets:
        - openstack-data-plane
        - <nodeSet_name>
        - ...
        - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
  3. Save the openstack_data_plane_deploy.yaml deployment file.
  4. Deploy the data plane:

    $ oc create -f openstack_data_plane_deploy.yaml -n openstack

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10

    If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
  5. Verify that the data plane is deployed:

    $ oc wait openstackdataplanedeployment data-plane-deploy --for=condition=Ready --timeout=<timeout_value>
    $ oc wait openstackdataplanenodeset openstack-data-plane --for=condition=Ready --timeout=<timeout_value>
    • Replace <timeout_value> with the number of minutes that you want the command to wait for the task to complete, for example, 60m to wait 60 minutes. If the Ready condition is not met within this time frame, the command returns a timeout error. Use a value that is appropriate for the size of your deployment, and give larger deployments more time to complete their deployment tasks.

      For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  6. Map the Compute nodes to the Compute cell that they are connected to:

    $ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose

    If you did not create additional cells, this command maps the Compute nodes to cell1.

  7. Access the remote shell for the openstackclient pod and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    $ openstack hypervisor list

    If some Compute nodes are missing from the hypervisor list, retry the previous step. If the Compute nodes are still missing from the list, check the status and health of the nova-compute services on the deployed data plane nodes.

  8. Verify that the hypervisor hostname is a fully qualified domain name (FQDN):

    $ hostname -f

    If the hypervisor hostname is not an FQDN, for example, if it was registered as a short name or full name instead, contact Red Hat Support.
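The FQDN check in the last step can be scripted. The following sketch classifies a hostname as an FQDN or a short name; the helper name and sample hostnames are illustrative:

```shell
# Sketch: treat a hostname as an FQDN if it contains a dot that separates
# non-empty labels, for example host.example.com but not plain "host".
is_fqdn() {
  case "$1" in
    *.*) [ "${1#.}" = "$1" ] && [ "${1%.}" = "$1" ] && return 0 ;;
  esac
  return 1
}

is_fqdn "edpm-compute-0.example.com" && echo "FQDN"
is_fqdn "edpm-compute-0" || echo "short name: contact Red Hat Support"
```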

5.7. Data plane conditions and states

Each data plane resource has a series of conditions within their status subresource that indicates the overall state of the resource, including its deployment progress.

For an OpenStackDataPlaneNodeSet, the Ready condition is False until an OpenStackDataPlaneDeployment has started and finished successfully. When the deployment succeeds, the Ready condition is set to True. A subsequent deployment sets the Ready condition back to False until that deployment also succeeds, when it returns to True.
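You can read the conditions directly from the resource status. On a live cluster the JSON comes from oc get openstackdataplanenodeset <name> -o json; the sketch below substitutes a trimmed sample of .status.conditions, so the extraction is illustrative:

```shell
# Sketch: extract the Ready condition status from a node set's conditions.
# A trimmed sample of `.status.conditions` stands in for live output.
sample_conditions='[
  {"type": "Ready", "status": "False", "message": "Deployment not started"},
  {"type": "SetupReady", "status": "True", "message": "Setup complete"}
]'

ready=$(printf '%s\n' "$sample_conditions" \
  | grep '"type": "Ready"' \
  | sed 's/.*"status": "\([A-Za-z]*\)".*/\1/')
echo "Ready=$ready"
```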

Table 5.5. OpenStackDataPlaneNodeSet CR conditions

Ready

  • "True": The OpenStackDataPlaneNodeSet CR is successfully deployed.
  • "False": The deployment is not yet requested or has failed, or there are other failed conditions.

SetupReady

"True": All setup tasks for a resource are complete. Setup tasks include verifying the SSH key secret, verifying other fields on the resource, and creating the Ansible inventory for each resource. Each service-specific condition is set to "True" when that service completes deployment. You can check the service conditions to see which services have completed their deployment, or which services failed.

DeploymentReady

"True": The NodeSet has been successfully deployed.

InputReady

"True": The required inputs are available and ready.

NodeSetDNSDataReady

"True": DNSData resources are ready.

NodeSetIPReservationReady

"True": The IPSet resources are ready.

NodeSetBaremetalProvisionReady

"True": Bare-metal nodes are provisioned and ready.

Table 5.6. OpenStackDataPlaneNodeSet status fields

Deployed

  • "True": The OpenStackDataPlaneNodeSet CR is successfully deployed.
  • "False": The deployment is not yet requested or has failed, or there are other failed conditions.

DNSClusterAddresses

 

CtlplaneSearchDomain

 
Table 5.7. OpenStackDataPlaneDeployment CR conditions

Ready

  • "True": The data plane is successfully deployed.
  • "False": The data plane deployment failed, or there are other failed conditions.

DeploymentReady

"True": The data plane is successfully deployed.

InputReady

"True": The required inputs are available and ready.

<NodeSet> Deployment Ready

"True": The deployment has succeeded for the named NodeSet, indicating all services for the NodeSet have succeeded.

<NodeSet> <Service> Deployment Ready

"True": The deployment has succeeded for the named NodeSet and Service. Each <NodeSet> <Service> Deployment Ready specific condition is set to "True" as that service completes successfully for the named NodeSet. Once all services are complete for a NodeSet, the <NodeSet> Deployment Ready condition is set to "True". The service conditions indicate which services have completed their deployment, or which services failed and for which NodeSets.

Table 5.8. OpenStackDataPlaneDeployment status fields

Deployed

  • "True": The data plane is successfully deployed. All Services for all NodeSets have succeeded.
  • "False": The deployment is not yet requested or has failed, or there are other failed conditions.
Table 5.9. OpenStackDataPlaneService CR conditions

Ready

"True": The service has been created and is ready for use. "False": The service has failed to be created.

5.8. Troubleshooting data plane creation and deployment

To troubleshoot a deployment when services are not deploying or operating correctly, you can check the job condition message for the service, and you can check the logs for a node set.

5.8.1. Checking the job condition message for a service

Each data plane deployment in the environment has associated services. Each of these services has a job condition message that matches the current status of the AnsibleEE job executing for that service. You can use this information to troubleshoot deployments when services are not deploying or operating correctly.

Procedure

  1. Determine the name and status of all deployments:

    $ oc get openstackdataplanedeployment

    The following example output shows a deployment that is currently in progress:

    NAME           NODESETS                  STATUS   MESSAGE
    edpm-compute   ["openstack-edpm-ipam"]   False    Deployment in progress
  2. Retrieve and inspect Ansible execution jobs.

    The Kubernetes jobs are labelled with the name of the OpenStackDataPlaneDeployment. You can list jobs for each OpenStackDataPlaneDeployment by using the label:

     $ oc get job -l openstackdataplanedeployment=edpm-compute
     NAME                                                 STATUS     COMPLETIONS   DURATION   AGE
     bootstrap-edpm-compute-openstack-edpm-ipam           Complete   1/1           78s        25h
     configure-network-edpm-compute-openstack-edpm-ipam   Complete   1/1           37s        25h
     configure-os-edpm-compute-openstack-edpm-ipam        Complete   1/1           66s        25h
     download-cache-edpm-compute-openstack-edpm-ipam      Complete   1/1           64s        25h
     install-certs-edpm-compute-openstack-edpm-ipam       Complete   1/1           46s        25h
     install-os-edpm-compute-openstack-edpm-ipam          Complete   1/1           57s        25h
     libvirt-edpm-compute-openstack-edpm-ipam             Complete   1/1           2m37s      25h
     neutron-metadata-edpm-compute-openstack-edpm-ipam    Complete   1/1           61s        25h
     nova-edpm-compute-openstack-edpm-ipam                Complete   1/1           3m20s      25h
     ovn-edpm-compute-openstack-edpm-ipam                 Complete   1/1           78s        25h
     run-os-edpm-compute-openstack-edpm-ipam              Complete   1/1           33s        25h
     ssh-known-hosts-edpm-compute                         Complete   1/1           19s        25h
     telemetry-edpm-compute-openstack-edpm-ipam           Complete   1/1           2m5s       25h
     validate-network-edpm-compute-openstack-edpm-ipam    Complete   1/1           16s        25h

    You can check logs by using oc logs -f job/<job-name>, for example, if you want to check the logs from the configure-network job:

     $ oc logs -f jobs/configure-network-edpm-compute-openstack-edpm-ipam | tail -n2
     PLAY RECAP *********************************************************************
     edpm-compute-0             : ok=22   changed=0    unreachable=0    failed=0    skipped=17   rescued=0    ignored=0

5.8.1.1. Job condition messages

AnsibleEE jobs have an associated condition message that indicates the current state of the service job. This condition message is displayed in the MESSAGE field of the oc get job <job_name> command output. Jobs return one of the following conditions when queried:

  • Job not started: The job has not started.
  • Job not found: The job could not be found.
  • Job is running: The job is currently running.
  • Job complete: The job execution is complete.
  • Job error occurred <error_message>: The job stopped executing unexpectedly. The <error_message> is replaced with a specific error message.

To further investigate a service that is displaying a particular job condition message, view its logs by using the command oc logs job/<service>. For example, to view the logs for the repo-setup-openstack-edpm service, use the command oc logs job/repo-setup-openstack-edpm.
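When scripting around these messages, the forms listed above can be matched mechanically. The following sketch classifies a condition message; the helper name is illustrative:

```shell
# Sketch: classify an AnsibleEE job condition message using the message
# forms listed above.
classify_job_message() {
  case "$1" in
    "Job complete")                      echo "ok" ;;
    "Job not started"|"Job is running")  echo "pending" ;;
    "Job not found")                     echo "missing" ;;
    "Job error occurred"*)               echo "error" ;;
    *)                                   echo "unknown" ;;
  esac
}

classify_job_message "Job complete"
classify_job_message "Job error occurred: exit status 2"
```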

5.8.2. Checking the logs for a node set

You can access the logs for a node set to check for deployment issues.

Procedure

  1. Retrieve pods with the OpenStackAnsibleEE label:

    $ oc get pods -l app=openstackansibleee
    configure-network-edpm-compute-j6r4l   0/1     Completed           0          3m36s
    validate-network-edpm-compute-6g7n9    0/1     Pending             0          0s
    validate-network-edpm-compute-6g7n9    0/1     ContainerCreating   0          11s
    validate-network-edpm-compute-6g7n9    1/1     Running             0          13s
  2. SSH into the pod you want to check:

    1. Pod that is running:

      $ oc rsh validate-network-edpm-compute-6g7n9
    2. Pod that is not running:

      $ oc debug configure-network-edpm-compute-j6r4l
  3. List the directories in the /runner/artifacts mount:

    $ ls /runner/artifacts
    configure-network-edpm-compute
    validate-network-edpm-compute
  4. View the stdout for the required artifact:

    $ cat /runner/artifacts/configure-network-edpm-compute/stdout

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution-Share Alike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the Linux Foundation, used under license.
All other trademarks are the property of their respective owners.