Chapter 7. Creating the data plane for dynamic routing


The Red Hat OpenStack Services on OpenShift (RHOSO) data plane consists of RHEL 9.4 nodes. Use the OpenStackDataPlaneNodeSet custom resource definition (CRD) to create the custom resources (CRs) that define the nodes and the layout of the data plane. An OpenStackDataPlaneNodeSet CR is a logical grouping of nodes of a similar type. A data plane typically consists of multiple OpenStackDataPlaneNodeSet CRs to define groups of nodes with different configurations and roles. You can use pre-provisioned or unprovisioned nodes in an OpenStackDataPlaneNodeSet CR:

  • Pre-provisioned node: You have used your own tooling to install the operating system on the node before adding it to the data plane.
  • Unprovisioned node: The node does not have an operating system installed before you add it to the data plane. The node is provisioned by using the Cluster Baremetal Operator (CBO) as part of the data plane creation and deployment process.
Note

You cannot include both pre-provisioned and unprovisioned nodes in the same OpenStackDataPlaneNodeSet CR.

Important

Currently, in dynamic routing environments, there is a limitation where the RHOSO control plane nodes cannot be configured as data plane gateway nodes. For this reason, you must have dedicated Networker nodes that host the OVN gateway chassis. This limitation will be solved in a future RHOSO release. For more information, see OSPRH-661.

To create and deploy a data plane, you must perform the following tasks:

  1. Create a Secret CR for each node set for Ansible to use to execute commands on the data plane nodes.
  2. Create the OpenStackDataPlaneNodeSet CRs that define the nodes and layout of the data plane.
  3. Create the OpenStackDataPlaneDeployment CR that triggers the Ansible execution that deploys and configures the software for the specified list of OpenStackDataPlaneNodeSet CRs.
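
The following minimal sketch shows how an OpenStackDataPlaneDeployment CR references node sets by name. The metadata and node set names are placeholders based on the examples later in this chapter; the full set of fields that you need is described in the deployment procedures:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: data-plane-deploy
  namespace: openstack
spec:
  nodeSets:
    - openstack-compute-nodes
    - openstack-networker-nodes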

The following procedures create simple node sets, one with pre-provisioned nodes, and one with bare-metal nodes that must be provisioned during the node set deployment. Use these procedures to set up an initial environment that you can test, before adding the customizations that your production environment requires.

You can add additional node sets to a deployed environment, and you can customize your deployed environment by updating the common configuration in the default ConfigMap CR for the service, and by creating custom services. For more information about how to customize your data plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.

7.1. Prerequisites

  • A functional control plane, created with the OpenStack Operator. For more information, see Creating the control plane.
  • You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.

7.2. Creating the data plane secrets

The data plane requires several Secret custom resources (CRs) to operate. The Secret CRs are used by the data plane nodes for the following functionality:

  • To enable secure access between nodes:

    • You must generate an SSH key and create an SSH key Secret CR for each key to enable Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key. You can create an SSH key for each OpenStackDataPlaneNodeSet CR in your data plane.
    • You must generate an SSH key and create an SSH key Secret CR for each key to enable migration of instances between Compute nodes.
  • To register the operating system of the nodes that are not registered to the Red Hat Customer Portal.
  • To enable repositories for the nodes.
  • To provide Compute nodes with access to libvirt.

Prerequisites

  • Pre-provisioned nodes are configured with an SSH public key in the $HOME/.ssh/authorized_keys file for a user with passwordless sudo privileges. For more information, see Managing sudo access in the RHEL Configuring basic system settings guide.
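
    For example, the following commands are a minimal sketch of one way to prepare such a user on a pre-provisioned node. The cloud-admin user name and the key file name are assumptions; substitute the values that you use in your environment:

    # Run as root on each pre-provisioned data plane node
    useradd cloud-admin
    echo "cloud-admin ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/cloud-admin
    chmod 0440 /etc/sudoers.d/cloud-admin
    mkdir -p /home/cloud-admin/.ssh
    # Install the public half of the SSH key pair that you reference in the Ansible SSH key Secret CR
    cat <key_file_name>.pub >> /home/cloud-admin/.ssh/authorized_keys
    chown -R cloud-admin: /home/cloud-admin/.ssh
    chmod 0700 /home/cloud-admin/.ssh
    chmod 0600 /home/cloud-admin/.ssh/authorized_keys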

Procedure

  1. For unprovisioned nodes, create the SSH key pair for Ansible:

    $ ssh-keygen -f <key_file_name> -N "" -t rsa -b 4096
    • Replace <key_file_name> with the name to use for the key pair.
  2. Create the Secret CR for Ansible and apply it to the cluster:

    $ oc create secret generic dataplane-ansible-ssh-private-key-secret \
    --save-config \
    --dry-run=client \
    --from-file=ssh-privatekey=<key_file_name> \
    --from-file=ssh-publickey=<key_file_name>.pub \
    [--from-file=authorized_keys=<key_file_name>.pub] -n openstack \
    -o yaml | oc apply -f -
    • Replace <key_file_name> with the name and location of your SSH key pair file.
    • Optional: Only include the --from-file=authorized_keys option for bare-metal nodes that must be provisioned when creating the data plane.
  3. If you are creating Compute nodes, create a secret for migration.

    1. Create the SSH key pair for instance migration:

      $ ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''
    2. Create the Secret CR for migration and apply it to the cluster:

      $ oc create secret generic nova-migration-ssh-key \
      --save-config \
      --from-file=ssh-privatekey=nova-migration-ssh-key \
      --from-file=ssh-publickey=nova-migration-ssh-key.pub \
      -n openstack \
      -o yaml | oc apply -f -
  4. For nodes that have not been registered to the Red Hat Customer Portal, create the Secret CR for subscription-manager credentials to register the nodes:

    $ oc create secret generic subscription-manager \
    --from-literal rhc_auth='{"login": {"username": "<subscription_manager_username>", "password": "<subscription_manager_password>"}}'
    • Replace <subscription_manager_username> with the username you set for subscription-manager.
    • Replace <subscription_manager_password> with the password you set for subscription-manager.
  5. Create a Secret CR that contains the Red Hat registry credentials:

    $ oc create secret generic redhat-registry --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'
    • Replace <username> and <password> with your Red Hat registry username and password credentials.

      For information about how to create your registry service account, see the Knowledge Base article Creating Registry Service Accounts.

  6. If you are creating Compute nodes, create a secret for libvirt.

    1. Create a file on your workstation named secret_libvirt.yaml to define the libvirt secret:

      apiVersion: v1
      kind: Secret
      metadata:
       name: libvirt-secret
       namespace: openstack
      type: Opaque
      data:
       LibvirtPassword: <base64_password>
      • Replace <base64_password> with a base64-encoded string with maximum length 63 characters. You can use the following command to generate a base64-encoded password:

        $ echo -n <password> | base64
        Tip

        If you do not want to base64-encode the password, you can use the stringData field instead of the data field to set the password, as shown in the following example.
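
        The following sketch is the same Secret CR written with stringData and a plain-text placeholder value:

        apiVersion: v1
        kind: Secret
        metadata:
          name: libvirt-secret
          namespace: openstack
        type: Opaque
        stringData:
          LibvirtPassword: <password>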

    2. Create the Secret CR:

      $ oc apply -f secret_libvirt.yaml -n openstack
  7. Verify that the Secret CRs are created:

    $ oc describe secret dataplane-ansible-ssh-private-key-secret
    $ oc describe secret nova-migration-ssh-key
    $ oc describe secret subscription-manager
    $ oc describe secret redhat-registry
    $ oc describe secret libvirt-secret

To configure the data plane for dynamic routing in your Red Hat OpenStack Services on OpenShift (RHOSO) environment with pre-provisioned nodes, create an OpenStackDataPlaneNodeSet CR for Compute nodes and an OpenStackDataPlaneNodeSet CR for Networker nodes. The Networker nodes contain the OVN gateway chassis.

Define an OpenStackDataPlaneNodeSet custom resource (CR) for the logical grouping of pre-provisioned nodes in your data plane that are Compute nodes. You can define as many Compute node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR. Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.

Important

Currently, in dynamic routing environments, there is a limitation where the RHOSO control plane cannot be distributed. For this reason, you must have dedicated Networker nodes that host the OVN gateway chassis. This limitation will be solved in a future RHOSO release. For more information, see OSPRH-661.

You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.

Tip

For an example OpenStackDataPlaneNodeSet CR that creates a node set from pre-provisioned Compute nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes.

Procedure

  1. Create a file on your workstation named openstack_compute_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-compute-nodes 1
      namespace: openstack
    spec:
      env: 2
        - name: ANSIBLE_FORCE_COLOR
          value: "True"
    1 The OpenStackDataPlaneNodeSet CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the nodes in the set.
    2 Optional: A list of environment variables to pass to the pod.
  2. Connect the Compute nodes on the data plane to the control plane network:

    spec:
      ...
      networkAttachments:
        - ctlplane
  3. Specify that the nodes in this set are pre-provisioned:

      preProvisioned: true
  4. Add the SSH key secret that you created to enable Ansible to connect to the Compute nodes on the data plane:

      nodeTemplate:
        ansibleSSHPrivateKeySecret: <secret-key>
    • Replace <secret-key> with the name of the SSH key Secret CR you created for this node set in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
  5. Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO files, and ansible-runner creates a FIFO file to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
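
    For example, the following PVC definition is a minimal sketch that satisfies these requirements. The PVC name, size, and storage class are assumptions; adjust them for your cluster, save the definition to a file, and apply it with oc apply -f <file> -n openstack:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ansible-logs-pvc
      namespace: openstack
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Filesystem
      resources:
        requests:
          storage: 10Gi
      storageClassName: <storage_class>
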
  6. Enable persistent logging for the data plane nodes:

      nodeTemplate:
        ...
        extraMounts:
          - extraVolType: Logs
            volumes:
            - name: ansible-logs
              persistentVolumeClaim:
                claimName: <pvc_name>
            mounts:
            - name: ansible-logs
              mountPath: "/runner/artifacts"
    • Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
  7. Specify the management network:

      nodeTemplate:
        ...
        managementNetwork: ctlplane
  8. Specify the Secret CRs used to source the usernames and passwords to register the operating system of your nodes and to enable repositories. The following example demonstrates how to register your nodes to CDN. For details on how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

      nodeTemplate:
        ...
        ansible:
          ansibleUser: cloud-admin 1
          ansiblePort: 22
          ansibleVarsFrom:
            - prefix: subscription_manager_
              secretRef:
                name: subscription-manager
            - prefix: registry_
              secretRef:
                name: redhat-registry
          ansibleVars: 2
            edpm_bootstrap_command: |
              subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }}
              subscription-manager release --set=9.4
              subscription-manager repos --disable=*
              subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms
            edpm_bootstrap_release_version_package: []
    1 The user associated with the secret you created in Creating the data plane secrets.
    2 The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/.

    For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log into registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.

  9. Add the network configuration template to apply to your Compute nodes. The following example applies the single NIC VLANs network configuration to the data plane nodes:

      nodeTemplate:
        ...
        ansible:
          ...
          ansibleVars:
            ...
            edpm_network_config_os_net_config_mappings:
              edpm-compute-0:
                nic1: 52:54:04:60:55:22 1
            neutron_physical_bridge_name: br-ex
            neutron_public_interface_name: eth0
            edpm_network_config_template: |
              ---
              {% set mtu_list = [ctlplane_mtu] %}
              {% for network in nodeset_networks %}
              {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
              {%- endfor %}
              {% set min_viable_mtu = mtu_list | max %}
              network_config:
              - type: ovs_bridge
                name: {{ neutron_physical_bridge_name }}
                use_dhcp: false
                use_dhcpv6: true
              - type: interface
                name: nic1
                use_dhcp: true
                defroute: false
              - type: interface
                name: nic2
                use_dhcp: false
                defroute: false
                dns_servers: {{ ctlplane_dns_nameservers }}
                domain: {{ dns_search_domains }}
                addresses:
                - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
              - type: interface
                name: nic3
                use_dhcp: false
                addresses:
                - ip_netmask: {{ lookup('vars', 'bgpnet0_ip') }}/30
              - type: interface
                name: nic4
                use_dhcp: false
                addresses:
                - ip_netmask: {{ lookup('vars', 'bgpnet1_ip') }}/30
              - type: interface
                name: lo
                addresses:
                - ip_netmask: {{ lookup('vars', 'bgpmainnet_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'bgpmainnetv6_ip') }}/128
                - ip_netmask: {{ lookup('vars', 'internalapi_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'storage_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'tenant_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'octavia_ip') }}/32
    1 Update nic1 to the MAC address assigned to the NIC that you want to use for network configuration on the Compute node.

    For alternative templates, see roles/edpm_network_config/templates. For more information about data plane network configuration, see Customizing data plane networks in Configuring networking services.

  10. Add the common configuration for the set of Compute nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration:

    Example
    edpm_frr_bgp_ipv4_src_network: bgpmainnet
    edpm_frr_bgp_neighbor_password: f00barZ
    edpm_frr_bgp_uplinks:
    - nic3
    - nic4
    edpm_ovn_encap_ip: '{{ lookup(''vars'', ''bgpmainnet_ip'') }}'

    For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties for dynamic routing.

  11. Define each node in this node set:

      nodes:
        edpm-compute-0: 1
          hostName: edpm-compute-0
          networks: 2
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100 3
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.100
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.100
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.100
          - name: BgpNet0
            subnetName: subnet0
            fixedIP: 100.64.0.2
          - name: BgpNet1
            subnetName: subnet0
            fixedIP: 100.65.0.2
          - name: BgpMainNet
            subnetName: subnet0
            fixedIP: 172.30.0.2
          ansible:
            ansibleHost: 192.168.122.100
            ansibleUser: cloud-admin
            ansibleVars: 4
              fqdn_internal_api: edpm-compute-0.example.com
        edpm-compute-1:
          hostName: edpm-compute-1
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.101
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.101
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.101
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.101
          - name: BgpNet0
            subnetName: subnet0
            fixedIP: 100.64.1.2
          - name: BgpNet1
            subnetName: subnet0
            fixedIP: 100.65.1.2
          - name: BgpMainNet
            subnetName: subnet0
            fixedIP: 172.30.1.2
          ansible:
            ansibleHost: 192.168.122.101
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-compute-1.example.com
    1 The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
    2 Defines the IPAM and the DNS records for the node.
    3 Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR.
    4 Node-specific Ansible variables that customize the node.
    Note
    • Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
    • You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
    • Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

    For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties.

  12. In the services section, ensure that the frr and ovn-bgp-agent services are included:

    Example
    services:
    - download-cache
    - redhat
    - bootstrap
    - configure-network
    - install-os
    - configure-os
    - frr
    - validate-network
    - ssh-known-hosts
    - run-os
    - reboot-os
    - install-certs
    - ovn
    - neutron-metadata
    - ovn-bgp-agent
    - libvirt
    - nova
  13. Save the openstack_compute_node_set.yaml definition file.
  14. Create the data plane resources:

    $ oc create --save-config -f openstack_compute_node_set.yaml -n openstack
  15. Verify that the data plane resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-compute-nodes \
    --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states.

  16. Verify that the Secret resource was created for the node set:

    $ oc get secret | grep openstack-compute-nodes
    dataplanenodeset-openstack-compute-nodes Opaque 1 3m50s
  17. Verify the services were created:

    $ oc get openstackdataplaneservice -n openstack
    NAME                AGE
    download-cache      46m
    bootstrap           46m
    configure-network   46m
    validate-network    46m
    frr                 46m
    install-os          46m
    ...

Define an OpenStackDataPlaneNodeSet custom resource (CR) for the logical grouping of pre-provisioned nodes in your data plane that are Networker nodes. You can define as many Networker node sets as necessary for your deployment.

Important

Currently, in dynamic routing environments, there is a limitation where the RHOSO control plane cannot be distributed. For this reason, you must have dedicated Networker nodes that host the OVN gateway chassis. This limitation will be solved in a future RHOSO release. For more information, see OSPRH-661.

You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.

Tip

For an example OpenStackDataPlaneNodeSet CR that creates a node set from pre-provisioned Networker nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes.

Procedure

  1. Create a file on your workstation named openstack_networker_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-networker-nodes 1
      namespace: openstack
    spec:
      env: 2
        - name: ANSIBLE_FORCE_COLOR
          value: "True"
    1 The OpenStackDataPlaneNodeSet CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the nodes in the set.
    2 Optional: A list of environment variables to pass to the pod.
  2. Connect the Networker nodes on the data plane to the control plane network:

    spec:
      ...
      networkAttachments:
        - ctlplane
  3. Specify that the nodes in this set are pre-provisioned:

      preProvisioned: true
  4. Add the SSH key secret that you created to enable Ansible to connect to the Networker nodes on the data plane:

      nodeTemplate:
        ansibleSSHPrivateKeySecret: <secret-key>
    • Replace <secret-key> with the name of the SSH key Secret CR you created for this node set in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
  5. Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO files, and ansible-runner creates a FIFO file to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
  6. Enable persistent logging for the data plane nodes:

      nodeTemplate:
        ...
        extraMounts:
          - extraVolType: Logs
            volumes:
            - name: ansible-logs
              persistentVolumeClaim:
                claimName: <pvc_name>
            mounts:
            - name: ansible-logs
              mountPath: "/runner/artifacts"
    • Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
  7. Specify the management network:

      nodeTemplate:
        ...
        managementNetwork: ctlplane
  8. Specify the Secret CRs used to source the usernames and passwords to register the operating system of your nodes and to enable repositories. The following example demonstrates how to register your nodes to CDN. For details on how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

      nodeTemplate:
        ...
        ansible:
          ansibleUser: cloud-admin 1
          ansiblePort: 22
          ansibleVarsFrom:
            - prefix: subscription_manager_
              secretRef:
                name: subscription-manager
            - prefix: registry_
              secretRef:
                name: redhat-registry
          ansibleVars: 2
            edpm_bootstrap_command: |
              subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }}
              subscription-manager release --set=9.4
              subscription-manager repos --disable=*
              subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms
            edpm_bootstrap_release_version_package: []
    1 The user associated with the secret you created in Creating the data plane secrets.
    2 The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/.

    For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log into registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.

  9. Add the network configuration template to apply to your Networker nodes. The following example applies the single NIC VLANs network configuration to the data plane nodes:

      nodeTemplate:
        ...
        ansible:
          ...
          ansibleVars:
            ...
            edpm_network_config_os_net_config_mappings:
              edpm-networker-0:
                nic1: 52:54:04:60:55:22 1
            neutron_physical_bridge_name: br-ex
            neutron_public_interface_name: eth0
            edpm_network_config_template: |
              ---
              {% set mtu_list = [ctlplane_mtu] %}
              {% for network in nodeset_networks %}
              {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
              {%- endfor %}
              {% set min_viable_mtu = mtu_list | max %}
              network_config:
              - type: ovs_bridge
                name: {{ neutron_physical_bridge_name }}
                use_dhcp: false
                use_dhcpv6: true
              - type: interface
                name: nic1
                use_dhcp: true
                defroute: false
              - type: interface
                name: nic2
                use_dhcp: false
                defroute: false
                dns_servers: {{ ctlplane_dns_nameservers }}
                domain: {{ dns_search_domains }}
                addresses:
                - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
              - type: interface
                name: nic3
                use_dhcp: false
                addresses:
                - ip_netmask: {{ lookup('vars', 'bgpnet0_ip') }}/30
              - type: interface
                name: nic4
                use_dhcp: false
                addresses:
                - ip_netmask: {{ lookup('vars', 'bgpnet1_ip') }}/30
              - type: interface
                name: lo
                addresses:
                - ip_netmask: {{ lookup('vars', 'bgpmainnet_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'bgpmainnetv6_ip') }}/128
                - ip_netmask: {{ lookup('vars', 'internalapi_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'storage_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'tenant_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'octavia_ip') }}/32
    1 Update nic1 to the MAC address assigned to the NIC that you want to use for network configuration on the Networker node.

    For alternative templates, see roles/edpm_network_config/templates. For more information about data plane network configuration, see Customizing data plane networks in the Configuring networking services guide.

  10. Add the common configuration for the set of Networker nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration:

    Example
    edpm_frr_bgp_ipv4_src_network: bgpmainnet
    edpm_frr_bgp_neighbor_password: f00barZ
    edpm_frr_bgp_uplinks:
    - nic3
    - nic4
    edpm_ovn_encap_ip: '{{ lookup(''vars'', ''bgpmainnet_ip'') }}'

    For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties for dynamic routing.

  11. Define each node in this node set:

      nodes:
        edpm-networker-0: 1
          hostName: edpm-networker-0
          networks: 2
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100 3
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.100
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.100
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.100
          - name: BgpNet0
            subnetName: subnet0
            fixedIP: 100.64.0.2
          - name: BgpNet1
            subnetName: subnet0
            fixedIP: 100.65.0.2
          - name: BgpMainNet
            subnetName: subnet0
            fixedIP: 172.30.0.2
          ansible:
            ansibleHost: 192.168.122.100
            ansibleUser: cloud-admin
            ansibleVars: 4
              fqdn_internal_api: edpm-networker-0.example.com
        edpm-networker-1:
          hostName: edpm-networker-1
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.101
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.101
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.101
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.101
          - name: BgpNet0
            subnetName: subnet0
            fixedIP: 100.64.1.2
          - name: BgpNet1
            subnetName: subnet0
            fixedIP: 100.65.1.2
          - name: BgpMainNet
            subnetName: subnet0
            fixedIP: 172.30.1.2
          ansible:
            ansibleHost: 192.168.122.101
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-networker-1.example.com
    1 The node definition reference, for example, edpm-networker-0. Each node in the node set must have a node definition.
    2 Defines the IPAM and the DNS records for the node.
    3 Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR.
    4 Node-specific Ansible variables that customize the node.
    Note
    • Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
    • You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
    • Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

    For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties.

  12. In the services section, ensure that the frr and ovn-bgp-agent services are included.

    Note

    Do not include the ssh-known-hosts service in this node set because it has already been included in the Compute node set CR. This service is included in only one node set CR because it is a global service.

    Example
    services:
    - download-cache
    - bootstrap
    - configure-network
    - install-os
    - configure-os
    - frr
    - validate-network
    - run-os
    - reboot-os
    - install-certs
    - ovn
    - neutron-metadata
    - ovn-bgp-agent
  13. Save the openstack_networker_node_set.yaml definition file.
  14. Create the Networker node resources for the data plane:

    $ oc create --save-config -f openstack_networker_node_set.yaml -n openstack
  15. Verify that the data plane resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-networker-nodes --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states.

  16. Verify that the Secret resource was created for the node set:

    $ oc get secret | grep openstack-networker-nodes
    dataplanenodeset-openstack-networker-nodes Opaque 1 3m50s
  17. Verify the services were created:

    $ oc get openstackdataplaneservice -n openstack
    NAME                AGE
    download-cache      46m
    bootstrap           46m
    configure-network   46m
    validate-network    46m
    frr                 46m
    install-os          46m
    ...

The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Compute nodes with some node-specific configuration. The example includes optional fields. Review the example and update the optional fields to the correct values for your environment or remove them before using the example in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.

Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-compute-nodes
  namespace: openstack
spec:
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
  networkAttachments:
    - ctlplane
  preProvisioned: true
  nodeTemplate:
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
    extraMounts:
      - extraVolType: Logs
        volumes:
        - name: ansible-logs
          persistentVolumeClaim:
            claimName: <pvc_name>
        mounts:
        - name: ansible-logs
          mountPath: "/runner/artifacts"
    managementNetwork: ctlplane
    ansible:
      ansibleUser: cloud-admin
      ansiblePort: 22
      ansibleVarsFrom:
        - prefix: subscription_manager_
          secretRef:
            name: subscription-manager
        - prefix: registry_
          secretRef:
            name: redhat-registry
      ansibleVars:
        edpm_bootstrap_command: |
          subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }}
          subscription-manager release --set=9.4
          subscription-manager repos --disable=*
          subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms
        edpm_bootstrap_release_version_package: []
        edpm_network_config_os_net_config_mappings:
          edpm-compute-0:
            nic1: 52:54:04:60:55:22
        neutron_physical_bridge_name: br-ex
        neutron_public_interface_name: eth0
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: interface
              name: nic1
              mtu: {{ min_viable_mtu }}
              # force the MAC address of the bridge to this interface
              primary: true
          {% for network in nodeset_networks %}
            - type: vlan
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              addresses:
              - ip_netmask:
                  {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
          {% endfor %}
  nodes:
    edpm-compute-0:
      hostName: edpm-compute-0
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.100
      - name: internalapi
        subnetName: subnet1
        fixedIP: 172.17.0.100
      - name: storage
        subnetName: subnet1
        fixedIP: 172.18.0.100
      - name: tenant
        subnetName: subnet1
        fixedIP: 172.19.0.100
      ansible:
        ansibleHost: 192.168.122.100
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-0.example.com
    edpm-compute-1:
      hostName: edpm-compute-1
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.101
      - name: internalapi
        subnetName: subnet1
        fixedIP: 172.17.0.101
      - name: storage
        subnetName: subnet1
        fixedIP: 172.18.0.101
      - name: tenant
        subnetName: subnet1
        fixedIP: 172.19.0.101
      ansible:
        ansibleHost: 192.168.122.101
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-1.example.com

Configuring the data plane for dynamic routing in your Red Hat OpenStack Services on OpenShift (RHOSO) environment by using unprovisioned nodes consists of the following tasks:

  1. Creating a BareMetalHost custom resource (CR) for each bare-metal data plane node.
  2. Defining an OpenStackDataPlaneNodeSet CR for Compute nodes and an OpenStackDataPlaneNodeSet CR for Networker nodes. The Networker nodes contain the OVN gateway chassis.

For more information about provisioning bare-metal nodes, see Planning provisioning for bare-metal data plane nodes in Planning your deployment.

Prerequisites

  • Cluster Baremetal Operator (CBO) is installed and configured for provisioning. For more information, see Planning provisioning for bare-metal data plane nodes in Planning your deployment.
  • To provision data plane nodes with PXE network boot, a bare-metal provisioning network must be available in your Red Hat OpenShift Container Platform (RHOCP) cluster.

    Note

    You do not need a provisioning network to provision nodes with virtual media.

  • A Provisioning CR is available in RHOCP. For more information about creating a Provisioning CR, see Configuring a provisioning resource to scale user-provisioned clusters in the Red Hat OpenShift Container Platform (RHOCP) Installing on bare metal guide.
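
    For reference, the following is a minimal sketch of a Provisioning CR for an environment that provisions nodes with virtual media and no provisioning network. The spec values shown are assumptions for illustration; follow the referenced guide for the supported configuration for your environment:

    apiVersion: metal3.io/v1alpha1
    kind: Provisioning
    metadata:
      name: provisioning-configuration
    spec:
      provisioningNetwork: Disabled
      watchAllNamespaces: true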

You must create a BareMetalHost custom resource (CR) for each bare-metal data plane node. At a minimum, you must provide the data required to add the bare-metal data plane node to the network so that the remaining installation steps can access the node and perform the configuration.

Note

If you use the ctlplane interface for provisioning and you have rp_filter configured on the kernel to enable Reverse Path Forwarding (RPF), then the reverse path filtering logic drops traffic. For information about how to prevent traffic being dropped because of the RPF filter, see How to prevent asymmetric routing in Deploying Red Hat OpenStack Services on OpenShift.
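
For example, you can check the current reverse path filtering mode on a node with the following command (0 = disabled, 1 = strict, 2 = loose); follow the referenced guide for the supported way to adjust it:

$ sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter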

Procedure

  1. The Bare Metal Operator (BMO) manages BareMetalHost custom resources (CRs) in the openshift-machine-api namespace by default. Update the Provisioning CR to watch all namespaces:

    $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'
  2. If you are using virtual media boot for bare-metal data plane nodes and the nodes are not connected to a provisioning network, you must update the Provisioning CR to enable virtualMediaViaExternalNetwork, which enables bare-metal connectivity through the external network:

    $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"virtualMediaViaExternalNetwork": true }}'
  3. Create a file on your workstation that defines the Secret CR with the credentials for accessing the Baseboard Management Controller (BMC) of each bare-metal data plane node in the node set:

    apiVersion: v1
    kind: Secret
    metadata:
      name: edpm-compute-0-bmc-secret
      namespace: openstack
    type: Opaque
    data:
      username: <base64_username>
      password: <base64_password>
    • Replace <base64_username> and <base64_password> with strings that are base64-encoded. You can use the following command to generate a base64-encoded string:

      $ echo -n <string> | base64
      Tip

      If you don’t want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.

  4. Create a file named bmh_nodes.yaml on your workstation, that defines the BareMetalHost CR for each bare-metal data plane node. The following example creates a BareMetalHost CR with the provisioning method Redfish virtual media:

    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: edpm-compute-0
      namespace: openstack
      labels: 1
        app: openstack
        workload: compute
    spec:
      bmc:
        address: redfish-virtualmedia+http://192.168.111.1:8000/redfish/v1/Systems/e8efd888-f844-4fe0-9e2e-498f4ab7806d 2
        credentialsName: edpm-compute-0-bmc-secret 3
      bootMACAddress: 00:c7:e4:a7:e7:f3
      bootMode: UEFI
      online: false
     [preprovisioningNetworkDataName: <network_config_secret_name>] 4
    1 Metadata labels, such as app, workload, and nodeName are key-value pairs that provide varying levels of granularity for labelling nodes. You can use these labels when you create an OpenStackDataPlaneNodeSet CR to describe the configuration of bare-metal nodes to be provisioned or to define nodes in a node set.
    2 The URL for communicating with the BMC controller of the node. For information about BMC addressing for other provisioning methods, see BMC addressing in the RHOCP Installing on bare metal guide.
    3 The name of the Secret CR you created in the previous step for accessing the BMC of the node.
    4 Optional: The name of the network configuration secret in the local namespace to pass to the pre-provisioning image. The network configuration must be in nmstate format.

    For more information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the RHOCP Installing on bare metal guide.

  5. Create the BareMetalHost resources:

    $ oc create -f bmh_nodes.yaml
  6. Verify that the BareMetalHost resources have been created and are in the Available state:

    $ oc get bmh
    NAME         STATE            CONSUMER              ONLINE   ERROR   AGE
    edpm-compute-0   Available      openstack-edpm        true             2d21h
    edpm-compute-1   Available      openstack-edpm        true             2d21h
    ...

Define an OpenStackDataPlaneNodeSet custom resource (CR) for the logical grouping of unprovisioned nodes in your data plane that are Compute nodes. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR. Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.

Important

Currently, in dynamic routing environments, there is a limitation where the RHOSO control plane cannot be distributed. For this reason, you must have dedicated Networker nodes that host the OVN gateway chassis. This limitation will be solved in a future RHOSO release. For more information, see OSPRH-661.

You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.

Tip

For an example OpenStackDataPlaneNodeSet CR that creates a node set from unprovisioned Compute nodes, see Example OpenStackDataPlaneNodeSet CR for unprovisioned nodes.

Prerequisites

  • You have created and applied a BareMetalHost CR for each bare-metal data plane node that you want to include in the node set, and the nodes are in the Available state.

Procedure

  1. Create a file on your workstation named openstack_unprovisioned_compute_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-compute-nodes 1
      namespace: openstack
    spec:
      tlsEnabled: true
      env: 2
        - name: ANSIBLE_FORCE_COLOR
          value: "True"
    1 The OpenStackDataPlaneNodeSet CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), must start and end with an alphanumeric character, and must have a maximum length of 20 characters. Update the name in this example to a name that reflects the nodes in the set.
    2 Optional: A list of environment variables to pass to the pod.
  2. Connect the Compute nodes on the data plane to the control plane network:

    spec:
      ...
      networkAttachments:
        - ctlplane
  3. Specify that the nodes in this set are unprovisioned and must be provisioned when creating the resource:

      preProvisioned: false
  4. Define the baremetalSetTemplate field to describe the configuration of the bare-metal nodes that must be provisioned when creating the resource:

      baremetalSetTemplate:
        deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
        bmhNamespace: <bmh_namespace>
        cloudUserName: <ansible_ssh_user>
        bmhLabelSelector:
          app: <bmh_label>
        ctlplaneInterface: <interface>
    • Replace <bmh_namespace> with the namespace defined in the corresponding BareMetalHost CR for the node, for example, openstack.
    • Replace <ansible_ssh_user> with the username of the Ansible SSH user, for example, cloud-admin.
    • Replace <bmh_label> with the metadata label defined in the corresponding BareMetalHost CR for the node, for example, openstack. Metadata labels, such as app, workload, and nodeName are key-value pairs for labelling nodes. Set the bmhLabelSelector field to select data plane nodes based on one or more labels that match the labels in the corresponding BareMetalHost CR.
    • Replace <interface> with the control plane interface the node connects to, for example, enp6s0.
  5. Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

      nodeTemplate:
        ansibleSSHPrivateKeySecret: <secret-key>
    • Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
  6. Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO files, and ansible-runner creates a FIFO file to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
  7. Enable persistent logging for the data plane nodes:

      nodeTemplate:
        ...
        extraMounts:
          - extraVolType: Logs
            volumes:
            - name: ansible-logs
              persistentVolumeClaim:
                claimName: <pvc_name>
            mounts:
            - name: ansible-logs
              mountPath: "/runner/artifacts"
    • Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
  8. Specify the management network:

      nodeTemplate:
        ...
        managementNetwork: ctlplane
  9. Specify the Secret CRs used to source the usernames and passwords to register the operating system of your nodes and to enable repositories. The following example demonstrates how to register your nodes to CDN. For details on how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

      nodeTemplate:
        ansible:
          ansibleUser: cloud-admin 1
          ansiblePort: 22
          ansibleVarsFrom:
            - prefix: subscription_manager_
              secretRef:
                name: subscription-manager
            - secretRef:
                name: redhat-registry
          ansibleVars: 2
            edpm_bootstrap_command: |
              subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }}
              subscription-manager release --set=9.4
              subscription-manager repos --disable=*
              subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms
            edpm_bootstrap_release_version_package: []
    1 The user associated with the secret you created in Creating the data plane secrets.
    2 The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/.

    For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log into registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.

  10. Add the network configuration template to apply to your Compute nodes. The following example applies the single NIC VLANs network configuration to the data plane nodes:

      nodeTemplate:
        ...
        ansible:
          ...
          ansibleVars:
            ...
            edpm_network_config_os_net_config_mappings:
              edpm-compute-0:
                nic1: 52:54:04:60:55:22 1
              edpm-compute-1:
                nic1: 52:54:04:60:55:22
            neutron_physical_bridge_name: br-ex
            neutron_public_interface_name: eth0
            edpm_network_config_update: false 2
            edpm_network_config_template: |
              ---
              {% set mtu_list = [ctlplane_mtu] %}
              {% for network in nodeset_networks %}
              {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
              {%- endfor %}
              {% set min_viable_mtu = mtu_list | max %}
              network_config:
              - type: ovs_bridge
                name: {{ neutron_physical_bridge_name }}
                use_dhcp: false
                use_dhcpv6: true
              - type: interface
                name: nic1
                use_dhcp: true
                defroute: false
              - type: interface
                name: nic2
                use_dhcp: false
                defroute: false
                dns_servers: {{ ctlplane_dns_nameservers }}
                domain: {{ dns_search_domains }}
                addresses:
                - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
              - type: interface
                name: nic3
                use_dhcp: false
                addresses:
                - ip_netmask: {{ lookup('vars', 'bgpnet0_ip') }}/30
              - type: interface
                name: nic4
                use_dhcp: false
                addresses:
                - ip_netmask: {{ lookup('vars', 'bgpnet1_ip') }}/30
              - type: interface
                name: lo
                addresses:
                - ip_netmask: {{ lookup('vars', 'bgpmainnet_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'bgpmainnetv6_ip') }}/128
                - ip_netmask: {{ lookup('vars', 'internalapi_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'storage_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'tenant_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'octavia_ip') }}/32
    Copy to Clipboard Toggle word wrap
    1
    Update the nic1 to the MAC address assigned to the NIC to use for network configuration on the Compute node.
    2
    When deploying a node set for the first time, set the edpm_network_config_update variable to false. When updating or adopting a node set, set edpm_network_config_update to true.
    Important

    After an update or an adoption, you must reset edpm_network_config_update to false. Otherwise, the nodes could lose network access. Whenever edpm_network_config_update is true, the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service in its servicesOverride list.

  11. Add the common configuration for the set of Compute nodes in this group under the nodeTemplate.ansible.ansibleVars section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration:

    Example
    edpm_frr_bgp_ipv4_src_network: bgpmainnet
    edpm_frr_bgp_neighbor_password: f00barZ
    edpm_frr_bgp_uplinks:
    - nic3
    - nic4
    edpm_ovn_encap_ip: '{{ lookup(''vars'', ''bgpmainnet_ip'') }}'

    For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties for dynamic routing.

  12. Define each node in this node set:

      nodes:
        edpm-compute-0: # 1
          hostName: edpm-compute-0
          networks: # 2
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100 # 3
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.100
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.100
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.100
          - name: BgpNet0
            subnetName: subnet0
            fixedIP: 100.64.0.2
          - name: BgpNet1
            subnetName: subnet0
            fixedIP: 100.65.0.2
          - name: BgpMainNet
            subnetName: subnet0
            fixedIP: 172.30.0.2
          ansible:
            ansibleHost: 192.168.122.100
            ansibleUser: cloud-admin
            ansibleVars: # 4
              fqdn_internal_api: edpm-compute-0.example.com
          bmhLabelSelector: # 5
            nodeName: edpm-compute-0
        edpm-compute-1:
          hostName: edpm-compute-1
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.101
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.101
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.101
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.101
          - name: BgpNet0
            subnetName: subnet0
            fixedIP: 100.64.1.2
          - name: BgpNet1
            subnetName: subnet0
            fixedIP: 100.65.1.2
          - name: BgpMainNet
            subnetName: subnet0
            fixedIP: 172.30.1.2
          ansible:
            ansibleHost: 192.168.122.101
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-compute-1.example.com
          bmhLabelSelector:
            nodeName: edpm-compute-1

    1 The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
    2 Defines the IPAM and the DNS records for the node.
    3 Specifies a predictable IP address for the network. The IP address must be in the allocation range defined for the network in the NetConfig CR.
    4 Node-specific Ansible variables that customize the node.
    5 Optional: Metadata labels, such as app, workload, and nodeName, are key-value pairs for labelling nodes. Set the bmhLabelSelector field to select data plane nodes based on one or more labels that match the labels in the corresponding BareMetalHost CR.
    Note
    • Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
    • You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
    • Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

    For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties.

  13. In the services section, ensure that the frr and ovn-bgp-agent services are included:

    Example
    services:
    - download-cache
    - redhat
    - bootstrap
    - configure-network
    - install-os
    - configure-os
    - frr
    - validate-network
    - ssh-known-hosts
    - run-os
    - reboot-os
    - install-certs
    - ovn
    - neutron-metadata
    - ovn-bgp-agent
    - libvirt
    - nova
  14. Save the openstack_unprovisioned_compute_node_set.yaml definition file.
  15. Create the data plane resources:

    $ oc create --save-config -f openstack_unprovisioned_compute_node_set.yaml -n openstack
  16. Verify that the data plane resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-compute-nodes --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states.

  17. Verify that the Secret resource was created for the node set:

    $ oc get secret -n openstack | grep openstack-compute-nodes
    dataplanenodeset-openstack-compute-nodes Opaque 1 3m50s
  18. Verify that the nodes have transitioned to the provisioned state:

    $ oc get bmh
    NAME            STATE         CONSUMER                  ONLINE   ERROR   AGE
    edpm-compute-0  provisioned   openstack-compute-nodes   true             3d21h
  19. Verify that the services were created:

    $ oc get openstackdataplaneservice -n openstack
    NAME                AGE
    download-cache      8m40s
    bootstrap           8m40s
    configure-network   8m40s
    validate-network    8m40s
    frr                 8m40s
    install-os          8m40s
    ...

Define an OpenStackDataPlaneNodeSet custom resource (CR) for the logical grouping of unprovisioned nodes in your data plane that are Networker nodes. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.

Important

Currently, in dynamic routing environments, there is a limitation where the RHOSO control plane cannot be distributed. For this reason, you must have dedicated Networker nodes that host the OVN gateway chassis. This limitation will be solved in a future RHOSO release. For more information, see OSPRH-661.

You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodeTemplate.nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.

Tip

For an example OpenStackDataPlaneNodeSet CR that creates a node set from unprovisioned Networker nodes, see Example OpenStackDataPlaneNodeSet CR for unprovisioned nodes.

Prerequisites

Procedure

  1. Create a file on your workstation named openstack_unprovisioned_networker_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-networker-nodes # 1
      namespace: openstack
    spec:
      tlsEnabled: true
      env: # 2
        - name: ANSIBLE_FORCE_COLOR
          value: "True"

    1 The OpenStackDataPlaneNodeSet CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), must start and end with an alphanumeric character, and must have a maximum length of 20 characters. Update the name in this example to a name that reflects the nodes in the set.
    2 Optional: A list of environment variables to pass to the pod.
  2. Connect the Networker nodes on the data plane to the control plane network:

    spec:
      ...
      networkAttachments:
        - ctlplane
  3. Specify that the nodes in this set are unprovisioned and must be provisioned when creating the resource:

      preProvisioned: false
  4. Define the baremetalSetTemplate field to describe the configuration of the bare-metal nodes that must be provisioned when creating the resource:

      baremetalSetTemplate:
        deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
        bmhNamespace: <bmh_namespace>
        cloudUserName: <ansible_ssh_user>
        bmhLabelSelector:
          app: <bmh_label>
        ctlplaneInterface: <interface>
    • Replace <bmh_namespace> with the namespace defined in the corresponding BareMetalHost CR for the node, for example, openstack.
    • Replace <ansible_ssh_user> with the username of the Ansible SSH user, for example, cloud-admin.
    • Replace <bmh_label> with the metadata label defined in the corresponding BareMetalHost CR for the node, for example, openstack. Metadata labels, such as app, workload, and nodeName are key-value pairs for labelling nodes. Set the bmhLabelSelector field to select data plane nodes based on one or more labels that match the labels in the corresponding BareMetalHost CR.
    • Replace <interface> with the control plane interface the node connects to, for example, enp6s0.
  5. Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

      nodeTemplate:
        ansibleSSHPrivateKeySecret: <secret-key>
    • Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
  6. Create a Persistent Volume Claim (PVC) in the openstack namespace on your RHOCP cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
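
    The following definition is a minimal example sketch of such a PVC. The claim name ansible-logs-pvc and the 5Gi request are assumptions for illustration, and storageClassName is omitted on the assumption that your cluster has a default storage class; adjust these values for your environment:

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: ansible-logs-pvc
        namespace: openstack
      spec:
        accessModes:
          - ReadWriteOnce
        volumeMode: Filesystem
        resources:
          requests:
            storage: 5Gi

    If you use a PVC like this one, reference its name in the next step by replacing <pvc_name> with ansible-logs-pvc.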
  7. Enable persistent logging for the data plane nodes:

      nodeTemplate:
        ...
        extraMounts:
          - extraVolType: Logs
            volumes:
            - name: ansible-logs
              persistentVolumeClaim:
                claimName: <pvc_name>
            mounts:
            - name: ansible-logs
              mountPath: "/runner/artifacts"
    • Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
  8. Specify the management network:

      nodeTemplate:
        ...
        managementNetwork: ctlplane
  9. Specify the Secret CRs used to source the usernames and passwords to register the operating system of your nodes and to enable repositories. The following example demonstrates how to register your nodes to CDN. For details on how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

      nodeTemplate:
        ansible:
          ansibleUser: cloud-admin # 1
          ansiblePort: 22
          ansibleVarsFrom:
            - prefix: subscription_manager_
              secretRef:
                name: subscription-manager
            - secretRef:
                name: redhat-registry
          ansibleVars: # 2
            edpm_bootstrap_command: |
              subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }}
              subscription-manager release --set=9.4
              subscription-manager repos --disable=*
              subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms
            edpm_bootstrap_release_version_package: []

    1 The user associated with the secret you created in Creating the data plane secrets.
    2 The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/.

    For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log into registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.

  10. Add the network configuration template to apply to your Networker nodes. The following example applies a multi-NIC network configuration for dynamic routing to the data plane nodes:

      nodeTemplate:
        ...
        ansible:
          ...
          ansibleVars:
            ...
            edpm_network_config_os_net_config_mappings:
              edpm-networker-0:
                nic1: 52:54:04:60:55:22 # 1
              edpm-networker-1:
                nic1: 52:54:04:60:55:22
            neutron_physical_bridge_name: br-ex
            neutron_public_interface_name: eth0
            edpm_network_config_update: false # 2
            edpm_network_config_template: |
              ---
              {% set mtu_list = [ctlplane_mtu] %}
              {% for network in nodeset_networks %}
              {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
              {%- endfor %}
              {% set min_viable_mtu = mtu_list | max %}
              network_config:
              - type: ovs_bridge
                name: {{ neutron_physical_bridge_name }}
                use_dhcp: false
                use_dhcpv6: true
              - type: interface
                name: nic1
                use_dhcp: true
                defroute: false
              - type: interface
                name: nic2
                use_dhcp: false
                defroute: false
                dns_servers: {{ ctlplane_dns_nameservers }}
                domain: {{ dns_search_domains }}
                addresses:
                - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
              - type: interface
                name: nic3
                use_dhcp: false
                addresses:
                - ip_netmask: {{ lookup('vars', 'bgpnet0_ip') }}/30
              - type: interface
                name: nic4
                use_dhcp: false
                addresses:
                - ip_netmask: {{ lookup('vars', 'bgpnet1_ip') }}/30
              - type: interface
                name: lo
                addresses:
                - ip_netmask: {{ lookup('vars', 'bgpmainnet_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'bgpmainnetv6_ip') }}/128
                - ip_netmask: {{ lookup('vars', 'internalapi_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'storage_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'tenant_ip') }}/32
                - ip_netmask: {{ lookup('vars', 'octavia_ip') }}/32

    1 Update nic1 to the MAC address assigned to the NIC to use for network configuration on the Networker node.
    2 When deploying a node set for the first time, set the edpm_network_config_update variable to false. When updating or adopting a node set, set edpm_network_config_update to true.
    Important

    After an update or an adoption, you must reset edpm_network_config_update to false. Otherwise, the nodes could lose network access. Whenever edpm_network_config_update is true, the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service in its servicesOverride list.

  11. Add the common configuration for the set of Networker nodes in this group under the nodeTemplate.ansible.ansibleVars section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration:

    Example
    edpm_frr_bgp_ipv4_src_network: bgpmainnet
    edpm_frr_bgp_neighbor_password: f00barZ
    edpm_frr_bgp_uplinks:
    - nic3
    - nic4
    edpm_ovn_encap_ip: '{{ lookup(''vars'', ''bgpmainnet_ip'') }}'

    For more information about data plane network configuration, see Customizing data plane networks in Configuring network services.

  12. Define each node in this node set:

      nodes:
        edpm-networker-0: # 1
          hostName: edpm-networker-0
          networks: # 2
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100 # 3
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.100
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.100
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.100
          - name: BgpNet0
            subnetName: subnet0
            fixedIP: 100.64.0.2
          - name: BgpNet1
            subnetName: subnet0
            fixedIP: 100.65.0.2
          - name: BgpMainNet
            subnetName: subnet0
            fixedIP: 172.30.0.2
          ansible:
            ansibleHost: 192.168.122.100
            ansibleUser: cloud-admin
            ansibleVars: # 4
              fqdn_internal_api: edpm-networker-0.example.com
          bmhLabelSelector: # 5
            nodeName: edpm-networker-0
        edpm-networker-1:
          hostName: edpm-networker-1
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.101
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.101
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.101
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.101
          - name: BgpNet0
            subnetName: subnet0
            fixedIP: 100.64.0.2
          - name: BgpNet1
            subnetName: subnet0
            fixedIP: 100.65.0.2
          - name: BgpMainNet
            subnetName: subnet0
            fixedIP: 172.30.0.2
          ansible:
            ansibleHost: 192.168.122.101
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-networker-1.example.com
          bmhLabelSelector:
            nodeName: edpm-networker-1

    1 The node definition reference, for example, edpm-networker-0. Each node in the node set must have a node definition.
    2 Defines the IPAM and the DNS records for the node.
    3 Specifies a predictable IP address for the network. The IP address must be in the allocation range defined for the network in the NetConfig CR.
    4 Node-specific Ansible variables that customize the node.
    5 Optional: Metadata labels, such as app, workload, and nodeName, are key-value pairs for labelling nodes. Set the bmhLabelSelector field to select data plane nodes based on one or more labels that match the labels in the corresponding BareMetalHost CR.
    Note
    • Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
    • You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
    • Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

    For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties.

  13. In the services section, ensure that the frr and ovn-bgp-agent services are included.

    Note

    Do not include the ssh-known-hosts service in this node set because it has already been included in the Compute node set CR. This service is included in only one node set CR because it is a global service.

    Example
    services:
    - download-cache
    - redhat
    - bootstrap
    - configure-network
    - install-os
    - configure-os
    - frr
    - validate-network
    - run-os
    - reboot-os
    - install-certs
    - ovn
    - neutron-metadata
    - ovn-bgp-agent
  14. Save the openstack_unprovisioned_networker_node_set.yaml definition file.
  15. Create the data plane resources:

    $ oc create --save-config -f openstack_unprovisioned_networker_node_set.yaml -n openstack
  16. Verify that the data plane resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-networker-nodes --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states.

  17. Verify that the Secret resource was created for the node set:

    $ oc get secret -n openstack | grep openstack-networker-nodes
    dataplanenodeset-openstack-networker-nodes Opaque 1 3m50s
  18. Verify that the nodes have transitioned to the provisioned state:

    $ oc get bmh
    NAME              STATE         CONSUMER                    ONLINE   ERROR   AGE
    edpm-networker-0  provisioned   openstack-networker-nodes   true             3d21h
  19. Verify that the services were created:

    $ oc get openstackdataplaneservice -n openstack
    NAME                AGE
    download-cache      9m17s
    bootstrap           9m17s
    configure-network   9m17s
    validate-network    9m17s
    frr                 9m17s
    install-os          9m17s
    ...

The following example OpenStackDataPlaneNodeSet CR creates a node set from unprovisioned Compute nodes with some node-specific configuration. The unprovisioned Compute nodes are provisioned when the node set is created. The OpenStackDataPlaneNodeSet CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the nodes in the set.

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-compute-nodes
  namespace: openstack
spec:
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
  networkAttachments:
    - ctlplane
  preProvisioned: false
  baremetalSetTemplate:
    deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
    bmhNamespace: openstack
    cloudUserName: cloud-admin
    bmhLabelSelector:
      app: openstack
    ctlplaneInterface: enp1s0
  nodeTemplate:
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
    extraMounts:
      - extraVolType: Logs
        volumes:
        - name: ansible-logs
          persistentVolumeClaim:
            claimName: <pvc_name>
        mounts:
        - name: ansible-logs
          mountPath: "/runner/artifacts"
    managementNetwork: ctlplane
    ansible:
      ansibleUser: cloud-admin
      ansiblePort: 22
      ansibleVarsFrom:
        - prefix: subscription_manager_
          secretRef:
            name: subscription-manager
        - secretRef:
            name: redhat-registry
      ansibleVars:
        edpm_bootstrap_command: |
          subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }}
          subscription-manager release --set=9.4
          subscription-manager repos --disable=*
          subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms
        edpm_bootstrap_release_version_package: []
        edpm_network_config_os_net_config_mappings:
          edpm-compute-0:
            nic1: 52:54:04:60:55:22
          edpm-compute-1:
            nic1: 52:54:04:60:55:22
        neutron_physical_bridge_name: br-ex
        neutron_public_interface_name: eth0
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: interface
              name: nic1
              mtu: {{ min_viable_mtu }}
              # force the MAC address of the bridge to this interface
              primary: true
          {% for network in nodeset_networks %}
            - type: vlan
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              addresses:
              - ip_netmask:
                  {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
          {% endfor %}
  nodes:
    edpm-compute-0:
      hostName: edpm-compute-0
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.100
      - name: internalapi
        subnetName: subnet1
      - name: storage
        subnetName: subnet1
      - name: tenant
        subnetName: subnet1
      ansible:
        ansibleHost: 192.168.122.100
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-0.example.com
    edpm-compute-1:
      hostName: edpm-compute-1
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.101
      - name: internalapi
        subnetName: subnet1
      - name: storage
        subnetName: subnet1
      - name: tenant
        subnetName: subnet1
      ansible:
        ansibleHost: 192.168.122.101
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-1.example.com

7.5. OpenStackDataPlaneNodeSet CR spec properties for dynamic routing

The following sections detail the OpenStackDataPlaneNodeSet CR spec properties you can configure.

7.5.1. nodeTemplate

Defines the common attributes for the nodes in this OpenStackDataPlaneNodeSet. You can override these common attributes in the definition for each individual node.

Table 7.1. nodeTemplate properties

ansibleSSHPrivateKeySecret

Name of the private SSH key secret that contains the private SSH key for connecting to nodes.

Secret name format: Secret.data.ssh-privatekey

For more information, see Creating an SSH authentication secret.

Default: dataplane-ansible-ssh-private-key-secret

edpm_frr_bgp_ipv4_src_network

The main IPv4 network used by the OVN BGP agent to communicate with FRRouting (FRR) on the RHOSO data plane.

edpm_frr_bgp_ipv6_src_network

The main IPv6 network used by the OVN BGP agent to communicate with FRR on the RHOSO data plane.

edpm_frr_bgp_neighbor_password

The password used to authenticate with the BGP peer.

edpm_frr_bgp_uplinks

The list of network interfaces used to communicate with the respective BGP peers, for example, nic3 and nic4.

edpm_ovn_bgp_agent_expose_tenant_networks

When set to true, tenant networks are exposed to the OVN BGP agent. The default is false.

edpm_ovn_encap_ip

The IP address that overrides the default IP address used to establish Geneve tunnels between Compute nodes and OVN controllers. By default, edpm_ovn_encap_ip uses the tenant network IP address that is assigned to the Compute node. In the following example, an IP address from a network called bgpmainnet overrides the default. The bgpmainnet network is configured on the loopback interface, the interface that BGP advertises: edpm_ovn_encap_ip: '{{ lookup(''vars'', ''bgpmainnet_ip'') }}'.

managementNetwork

Name of the network to use for management (SSH/Ansible). Default: ctlplane

networks

Network definitions for the OpenStackDataPlaneNodeSet.

ansible

Ansible configuration options. For more information, see ansible properties.

extraMounts

The files to mount into an Ansible Execution Pod.

userData

UserData configuration for the OpenStackDataPlaneNodeSet.

networkData

NetworkData configuration for the OpenStackDataPlaneNodeSet.

7.5.2. nodes

Defines the node names and node-specific attributes for the nodes in this OpenStackDataPlaneNodeSet. Overrides the common attributes defined in the nodeTemplate.

Table 7.2. nodes properties

ansible

Ansible configuration options. For more information, see ansible properties.

edpm_frr_bgp_peers

The list of BGP peer IP addresses for the node, for example:

  • 100.64.0.5
  • 100.65.0.5

edpm_ovn_bgp_agent_local_ovn_peer_ips

The list of IP addresses of the local OVN BGP peers for the node, for example:

  • 100.64.0.5
  • 100.65.0.5

extraMounts

The files to mount into an Ansible Execution Pod.

hostName

The node name.

managementNetwork

Name of the network to use for management (SSH/Ansible).

networkData

NetworkData configuration for the node.

networks

Instance networks.

userData

Node-specific user data.
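
The edpm_frr_bgp_peers and edpm_ovn_bgp_agent_local_ovn_peer_ips entries in this table are edpm-ansible variables, so the following sketch shows one way to set them for an individual node under its ansible.ansibleVars section. The peer IP addresses are illustrative values only and must match the BGP peer addressing in your environment:

nodes:
  edpm-compute-0:
    hostName: edpm-compute-0
    ansible:
      ansibleVars:
        edpm_frr_bgp_peers:
          - 100.64.0.5
          - 100.65.0.5
        edpm_ovn_bgp_agent_local_ovn_peer_ips:
          - 100.64.0.5
          - 100.65.0.5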

7.5.3. ansible

Defines the group of Ansible configuration options.

Table 7.3. ansible properties

ansibleUser

The user associated with the secret you created in Creating the data plane secrets. Default: rhel-user

ansibleHost

SSH host for the Ansible connection.

ansiblePort

SSH port for the Ansible connection.

ansibleVars

The Ansible variables that customize the set of nodes. You can use this property to configure any custom Ansible variable, including the Ansible variables available for each edpm-ansible role. For a complete list of Ansible variables by role, see the edpm-ansible documentation.

Note

The ansibleVars parameters that you can configure for an OpenStackDataPlaneNodeSet CR are determined by the services defined for the OpenStackDataPlaneNodeSet. The OpenStackDataPlaneService CRs call the Ansible playbooks from the edpm-ansible playbook collection, which include the roles that are executed as part of the data plane service.

ansibleVarsFrom

A list of sources to populate Ansible variables from. When a variable key is defined in both ansibleVars and a source listed in ansibleVarsFrom, the value from ansibleVars takes precedence. For more information, see ansibleVarsFrom properties.

7.5.4. ansibleVarsFrom

Defines the list of sources to populate Ansible variables from.

Table 7.4. ansibleVarsFrom properties

prefix

An optional identifier to prepend to each key in the ConfigMap or Secret. Must be a C_IDENTIFIER.

configMapRef

The ConfigMap CR to select the ansibleVars from.

secretRef

The Secret CR to select the ansibleVars from.
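
The following snippet is a sketch of how these properties combine; the ConfigMap named common-edpm-vars is a hypothetical example, while the subscription-manager Secret and prefix match the registration example earlier in this chapter. Each key in the referenced ConfigMap or Secret becomes an Ansible variable, with the optional prefix prepended to the key name:

nodeTemplate:
  ansible:
    ansibleVarsFrom:
      - configMapRef:
          name: common-edpm-vars
      - prefix: subscription_manager_
        secretRef:
          name: subscription-manager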

7.6. Deploying the data plane for dynamic routing

You use the OpenStackDataPlaneDeployment CRD to configure the services on the data plane nodes and deploy the data plane for dynamic routing in your Red Hat OpenStack Services on OpenShift (RHOSO) environment. You control the execution of Ansible on the data plane by creating OpenStackDataPlaneDeployment custom resources (CRs). Each OpenStackDataPlaneDeployment CR models a single Ansible execution. When the OpenStackDataPlaneDeployment CR successfully completes execution, it does not automatically run Ansible again, even if the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources are changed. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR.

Create an OpenStackDataPlaneDeployment custom resource (CR) that deploys each of your OpenStackDataPlaneNodeSet CRs.

Procedure

  1. Create a file on your workstation named openstack_data_plane_deploy.yaml to define the OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: data-plane-deploy # 1
      namespace: openstack

    1 The OpenStackDataPlaneDeployment CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the node sets in the deployment.
  2. Add the OpenStackDataPlaneNodeSet CRs that you have created for the Compute and Networker nodes:

    spec:
      nodeSets:
        - openstack-compute-nodes
        - openstack-networker-nodes
  3. Save the openstack_data_plane_deploy.yaml deployment file.
  4. Deploy the data plane:

    $ oc create -f openstack_data_plane_deploy.yaml -n openstack

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10

    If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
  5. Verify that the data plane is deployed:

    $ oc get openstackdataplanedeployment -n openstack
    NAME             	STATUS   MESSAGE
    data-plane-deploy   True     Setup Complete
    
    $ oc get openstackdataplanenodeset -n openstack
    NAME                      STATUS MESSAGE
    openstack-compute-nodes   True   NodeSet Ready
    openstack-networker-nodes True   NodeSet Ready

    For information about the meaning of the returned status, see Data plane conditions and states.

    If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment.

  6. Map the Compute nodes to the Compute cell that they are connected to:

    $ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose

    If you did not create additional cells, this command maps the Compute nodes to cell1.
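
    Optionally, you can confirm the mapping by listing the hosts that are registered in each cell. This check is a suggestion rather than a required step, and it assumes the same nova-cell0-conductor-0 pod used in the previous command:

    $ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 list_hosts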

  7. Access the remote shell for the openstackclient pod and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    $ openstack hypervisor list

7.7. Data plane conditions and states

Each data plane resource has a series of conditions within their status subresource that indicates the overall state of the resource, including its deployment progress.

For an OpenStackDataPlaneNodeSet, until an OpenStackDataPlaneDeployment has been started and finished successfully, the Ready condition is False. When the deployment succeeds, the Ready condition is set to True. A subsequent deployment sets the Ready condition to False until the deployment succeeds, when the Ready condition is set to True.
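
For example, after a deployment completes you can wait for a node set to report the Ready condition, using the same pattern as the SetupReady check earlier in this chapter:

$ oc wait openstackdataplanenodeset openstack-compute-nodes --for condition=Ready --timeout=10m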

Table 7.5. OpenStackDataPlaneNodeSet CR conditions

Ready

  • "True": The OpenStackDataPlaneNodeSet CR is successfully deployed.
  • "False": The deployment is not yet requested or has failed, or there are other failed conditions.

SetupReady

"True": All setup tasks for a resource are complete. Setup tasks include verifying the SSH key secret, verifying other fields on the resource, and creating the Ansible inventory for each resource. Each service-specific condition is set to "True" when that service completes deployment. You can check the service conditions to see which services have completed their deployment, or which services failed.

DeploymentReady

"True": The NodeSet has been successfully deployed.

InputReady

"True": The required inputs are available and ready.

NodeSetDNSDataReady

"True": DNSData resources are ready.

NodeSetIPReservationReady

"True": The IPSet resources are ready.

NodeSetBaremetalProvisionReady

"True": Bare-metal nodes are provisioned and ready.

Table 7.6. OpenStackDataPlaneNodeSet status fields

Deployed

  • "True": The OpenStackDataPlaneNodeSet CR is successfully deployed.
  • "False": The deployment is not yet requested or has failed, or there are other failed conditions.

DNSClusterAddresses

CtlplaneSearchDomain

Table 7.7. OpenStackDataPlaneDeployment CR conditions

Ready

  • "True": The data plane is successfully deployed.
  • "False": The data plane deployment failed, or there are other failed conditions.

DeploymentReady

"True": The data plane is successfully deployed.

InputReady

"True": The required inputs are available and ready.

<NodeSet> Deployment Ready

"True": The deployment has succeeded for the named NodeSet, indicating all services for the NodeSet have succeeded.

<NodeSet> <Service> Deployment Ready

"True": The deployment has succeeded for the named NodeSet and Service. Each <NodeSet> <Service> Deployment Ready specific condition is set to "True" as that service completes successfully for the named NodeSet. Once all services are complete for a NodeSet, the <NodeSet> Deployment Ready condition is set to "True". The service conditions indicate which services have completed their deployment, or which services failed and for which NodeSets.

Table 7.8. OpenStackDataPlaneDeployment status fields

Deployed

  • "True": The data plane is successfully deployed. All Services for all NodeSets have succeeded.
  • "False": The deployment is not yet requested or has failed, or there are other failed conditions.

Table 7.9. OpenStackDataPlaneService CR conditions

Ready

"True": The service has been created and is ready for use. "False": The service has failed to be created.

7.8. Troubleshooting data plane creation and deployment

To troubleshoot a deployment when services are not deploying or operating correctly, you can check the job condition message for the service, and you can check the logs for a node set.

7.8.1. Checking the job condition message for a service

Each data plane deployment in the environment has associated services. Each of these services has a job condition message that matches the current status of the AnsibleEE job executing for that service. You can use this information to troubleshoot deployments when services are not deploying or operating correctly.

Procedure

  1. Determine the name and status of all deployments:

    $ oc get openstackdataplanedeployment

    The following example output shows two deployments currently in progress:

    $ oc get openstackdataplanedeployment
    
    NAME              NODESETS                       STATUS  MESSAGE
    data-plane-deploy ["openstack-compute-nodes"]    False   Deployment in progress
    data-plane-deploy ["openstack-networker-nodes"]  False   Deployment in progress
  2. Determine the name and status of all services and their job condition:

    $ oc get openstackansibleee

    The following example output shows all services and their job condition for all current deployments:

    $ oc get openstackansibleee
    
    NAME                             NETWORKATTACHMENTS   STATUS   MESSAGE
    bootstrap-openstack-edpm         ["ctlplane"]         True     Job complete
    download-cache-openstack-edpm    ["ctlplane"]         False    Job is running
    repo-setup-openstack-edpm        ["ctlplane"]         True     Job complete
    validate-network-another-osdpd   ["ctlplane"]         False    Job is running

    For information on the job condition messages, see Job condition messages.

  3. Filter for the name and service for a specific deployment:

    $ oc get openstackansibleee -l \
    openstackdataplanedeployment=<deployment_name>
    • Replace <deployment_name> with the name of the deployment to use to filter the services list.

      The following example filters the list to only show services and their job condition for the data-plane-deploy deployment:

      $ oc get openstackansibleee -l \
      openstackdataplanedeployment=data-plane-deploy
      
      NAME                            NETWORKATTACHMENTS   STATUS   MESSAGE
      bootstrap-openstack-edpm        ["ctlplane"]         True     Job complete
      download-cache-openstack-edpm   ["ctlplane"]         False    Job is running
      repo-setup-openstack-edpm       ["ctlplane"]         True     Job complete

7.8.1.1. Job condition messages

AnsibleEE jobs have an associated condition message that indicates the current state of the service job. This condition message is displayed in the MESSAGE field of the oc get job <job_name> command output. Jobs return one of the following conditions when queried:

  • Job not started: The job has not started.
  • Job not found: The job could not be found.
  • Job is running: The job is currently running.
  • Job complete: The job execution is complete.
  • Job error occurred <error_message>: The job stopped executing unexpectedly. The <error_message> is replaced with a specific error message.

To further investigate a service that is displaying a particular job condition message, view its logs by using the command oc logs job/<service>. For example, to view the logs for the repo-setup-openstack-edpm service, use the command oc logs job/repo-setup-openstack-edpm.

7.8.2. Checking the logs for a node set

You can access the logs for a node set to check for deployment issues.

Procedure

  1. Retrieve pods with the OpenStackAnsibleEE label:

    $ oc get pods -l app=openstackansibleee
    configure-network-edpm-compute-j6r4l   0/1     Completed           0          3m36s
    validate-network-edpm-compute-6g7n9    0/1     Pending             0          0s
    validate-network-edpm-compute-6g7n9    0/1     ContainerCreating   0          11s
    validate-network-edpm-compute-6g7n9    1/1     Running             0          13s
  2. SSH into the pod you want to check:

    1. Pod that is running:

      $ oc rsh validate-network-edpm-compute-6g7n9
    2. Pod that is not running:

      $ oc debug configure-network-edpm-compute-j6r4l
  3. List the directories in the /runner/artifacts mount:

    $ ls /runner/artifacts
    configure-network-edpm-compute
    validate-network-edpm-compute
  4. View the stdout for the required artifact:

    $ cat /runner/artifacts/configure-network-edpm-compute/stdout