Chapter 6. Creating the data plane with a routed spine-leaf network topology


The Red Hat OpenStack Services on OpenShift (RHOSO) data plane consists of RHEL 9.4 nodes. Use the OpenStackDataPlaneNodeSet custom resource definition (CRD) to create the custom resources (CRs) that define the nodes and the layout of the data plane. An OpenStackDataPlaneNodeSet CR is a logical grouping of nodes of a similar type.

To create and deploy a data plane with a routed spine-leaf network topology, you must perform the following tasks:

  1. Create a Secret CR for each node set for Ansible to use to execute commands on the data plane nodes.
  2. Create a BareMetalHost CR for each node in each node set, with virtual media as the boot method. You must configure the BareMetalHost CRs to use one of the following options to provide the base network connectivity for your spine-leaf environment:

    • External base networking: An external DHCP or Stateless Address Auto-Configuration (SLAAC) server that is not managed by Metal3, and that routes IP traffic to the Red Hat OpenShift Container Platform (RHOCP) cluster. If you use this option, DHCP or SLAAC is used for the Ironic Python Agent (IPA), but neither is required in the final configuration of the deployed data plane node.
    • Network configuration on the ramdisk: The network configuration is embedded in the virtual media ramdisk. You can use this method if your RHOSO deployment does not use automatic network configuration through DHCP or SLAAC. You provide the network configuration for the ramdisk and the bare-metal node in advance, to configure the interface addresses and to allow the network traffic that deployment requires.
  3. Create the OpenStackDataPlaneNodeSet CRs for each group of unprovisioned nodes in a leaf. You can define as many node sets as necessary for your deployment.
  4. Create the OpenStackDataPlaneDeployment CR that triggers the Ansible execution that deploys and configures the software for the specified list of OpenStackDataPlaneNodeSet CRs.

You can add additional node sets to a deployed environment, and you can customize your deployed environment by updating the common configuration in the default ConfigMap CR for the service, and by creating custom services. For more information about how to customize your data plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.

6.1. Prerequisites

  • An operational control plane, created with the OpenStack Operator. For more information, see Creating the control plane.
  • Cluster Baremetal Operator (CBO) is installed and configured for provisioning. For more information, see Planning provisioning for bare-metal data plane nodes in Planning your deployment.
  • A Provisioning CR is available in RHOCP. For more information about creating a Provisioning CR, see Configuring a provisioning resource to scale user-provisioned clusters in the Red Hat OpenShift Container Platform (RHOCP) Installing on bare metal guide.
  • IP connectivity exists between the Red Hat OpenShift Container Platform (RHOCP) cluster and the Baseboard Management Controller (BMC) of the bare-metal node, so that commands can be transmitted to the BMC, and the BMC can download the Virtual Media image.
  • The DHCP environment of your network must match the cluster IP version.
  • You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.

6.2. Creating the data plane secrets

The data plane requires several Secret custom resources (CRs) to operate. The Secret CRs are used by the data plane nodes for the following functionality:

  • To enable secure access between nodes:

    • You must generate an SSH key and create an SSH key Secret CR for each key to enable Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key. You can create an SSH key for each OpenStackDataPlaneNodeSet CR in your data plane.
    • You must generate an SSH key and create an SSH key Secret CR for each key to enable migration of instances between Compute nodes.
  • To register the operating system of the nodes that are not registered to the Red Hat Customer Portal.
  • To enable repositories for the nodes.
  • To provide Compute nodes with access to libvirt.

Prerequisites

  • Pre-provisioned nodes are configured with an SSH public key in the $HOME/.ssh/authorized_keys file for a user with passwordless sudo privileges. For more information, see Managing sudo access in the RHEL Configuring basic system settings guide.

Procedure

  1. For unprovisioned nodes, create the SSH key pair for Ansible:

    $ ssh-keygen -f <key_file_name> -N "" -t rsa -b 4096
    • Replace <key_file_name> with the name to use for the key pair.
  2. Create the Secret CR for Ansible and apply it to the cluster:

    $ oc create secret generic dataplane-ansible-ssh-private-key-secret \
    --save-config \
    --dry-run=client \
    --from-file=ssh-privatekey=<key_file_name> \
    --from-file=ssh-publickey=<key_file_name>.pub \
    [--from-file=authorized_keys=<key_file_name>.pub] -n openstack \
    -o yaml | oc apply -f -
    • Replace <key_file_name> with the name and location of your SSH key pair file.
    • Optional: Only include the --from-file=authorized_keys option for bare-metal nodes that must be provisioned when creating the data plane.
  3. If you are creating Compute nodes, create a secret for migration.

    1. Create the SSH key pair for instance migration:

      $ ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''
    2. Create the Secret CR for migration and apply it to the cluster:

      $ oc create secret generic nova-migration-ssh-key \
      --save-config \
      --from-file=ssh-privatekey=nova-migration-ssh-key \
      --from-file=ssh-publickey=nova-migration-ssh-key.pub \
      -n openstack \
      -o yaml | oc apply -f -
  4. For nodes that have not been registered to the Red Hat Customer Portal, create the Secret CR for subscription-manager credentials to register the nodes:

    $ oc create secret generic subscription-manager \
    --from-literal rhc_auth='{"login": {"username": "<subscription_manager_username>", "password": "<subscription_manager_password>"}}'
    • Replace <subscription_manager_username> with the username you set for subscription-manager.
    • Replace <subscription_manager_password> with the password you set for subscription-manager.
  5. Create a Secret CR that contains the Red Hat registry credentials:

    $ oc create secret generic redhat-registry --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'
    • Replace <username> and <password> with your Red Hat registry username and password credentials.

      For information about how to create your registry service account, see the Knowledge Base article Creating Registry Service Accounts.
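      If you pull images from more than one registry, the edpm_container_registry_logins value accepts one entry per registry. The following is a minimal sketch that extends the command with a second, hypothetical registry named registry.example.com:

      $ oc create secret generic redhat-registry \
      --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}, "registry.example.com": {"<username>": "<password>"}}'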

  6. If you are creating Compute nodes, create a secret for libvirt.

    1. Create a file on your workstation named secret_libvirt.yaml to define the libvirt secret:

      apiVersion: v1
      kind: Secret
      metadata:
        name: libvirt-secret
        namespace: openstack
      type: Opaque
      data:
        LibvirtPassword: <base64_password>
      • Replace <base64_password> with a base64-encoded string with maximum length 63 characters. You can use the following command to generate a base64-encoded password:

        $ echo -n <password> | base64
        Tip

        If you do not want to base64-encode the password, you can use the stringData field instead of the data field to set the password.
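
        For example, a minimal sketch of the same Secret that uses stringData with a plaintext password (the password value is a placeholder):

        apiVersion: v1
        kind: Secret
        metadata:
          name: libvirt-secret
          namespace: openstack
        type: Opaque
        stringData:
          LibvirtPassword: <password>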

    2. Create the Secret CR:

      $ oc apply -f secret_libvirt.yaml -n openstack
  7. Verify that the Secret CRs are created:

    $ oc describe secret dataplane-ansible-ssh-private-key-secret
    $ oc describe secret nova-migration-ssh-key
    $ oc describe secret subscription-manager
    $ oc describe secret redhat-registry
    $ oc describe secret libvirt-secret

6.3. Creating the BareMetalHost CRs with external base networking

You can use Redfish Virtual Media to create your spine-leaf network topology with base connectivity provided by an external DHCP or Stateless Address Auto-Configuration (SLAAC) server that is not managed by Metal3. The network must route IP traffic to the Red Hat OpenShift Container Platform (RHOCP) cluster. At a minimum, you must provide the data required to add the bare-metal data plane node to the network so that the remaining installation steps can access the node and perform the configuration.

Note

If you use the ctlplane interface for provisioning, configure the DHCP service to use an address range different from the ctlplane address range, to prevent the kernel rp_filter logic from dropping traffic. This ensures that the return traffic remains on the machine network interface.

Procedure

  1. The Bare Metal Operator (BMO) manages BareMetalHost custom resources (CRs) in the openshift-machine-api namespace by default. Update the Provisioning CR to watch all namespaces:

    $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'
  2. Update the Provisioning CR to enable virtualMediaViaExternalNetwork, which enables bare-metal connectivity through the external network:

    $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"virtualMediaViaExternalNetwork": true }}'
  3. Create a file on your workstation named bmh_leaf1_nodes.yaml that defines the Secret CR with the credentials for accessing the BMC of each bare-metal data plane node in the node set:

    apiVersion: v1
    kind: Secret
    metadata:
      name: edpm-compute-0-bmc-secret
      namespace: openstack
    type: Opaque
    data:
      username: <base64_username>
      password: <base64_password>
    • Replace <base64_username> and <base64_password> with strings that are base64-encoded. You can use the following command to generate a base64-encoded string:

      $ echo -n <string> | base64
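      For example, encoding the hypothetical username admin produces the following output:

      $ echo -n admin | base64
      YWRtaW4=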
      Tip

      If you don’t want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.

  4. Create a file on your workstation that defines the BareMetalHost CR for each bare-metal data plane node, with virtual media as the boot method:

    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: edpm-compute-0
      namespace: openstack
      labels:
        app: openstack
        workload: compute
    spec:
      bmc:
        address: redfish-virtualmedia+http://192.168.111.1:8000/redfish/v1/Systems/e8efd888-f844-4fe0-9e2e-498f4ab7806d 1
        credentialsName: edpm-compute-0-bmc-secret 2
      bootMACAddress: 00:c7:e4:a7:e7:f3
      bootMode: UEFI
      online: false
    1
    The URL for communicating with the node’s Baseboard Management Controller (BMC). For more information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the RHOCP Postinstallation configuration guide. For information on BMC addressing for other boot methods, see BMC addressing in the RHOCP Deploying installer-provisioned clusters on bare metal guide.
    2
    The name of the Secret CR you created in the previous step for accessing the BMC of the node.
  5. Create the BareMetalHost resources:

    $ oc create -f bmh_leaf1_nodes.yaml
  6. Verify that the BareMetalHost resources have been created and are in the Available state:

    $ oc get bmh
    NAME             STATE       CONSUMER         ONLINE   ERROR   AGE
    edpm-compute-0   Available   openstack-edpm   true             2d21h

6.4. Creating the BareMetalHost CRs with network configuration on the ramdisk

You can use Redfish Virtual Media to create your spine-leaf topology with network configuration on the ramdisk if your Red Hat OpenStack Services on OpenShift (RHOSO) deployment does not use automatic network configuration through DHCP or SLAAC.

Red Hat OpenShift Container Platform (RHOCP) uses nmstate to report on and configure the state of the node network. You create a Secret custom resource (CR) for each bare-metal data plane node and use the nmstate schema to configure the pre-provisioning network configuration data that the ramdisk requires to add the bare-metal data plane node to the network. For more information about nmstate, see Introduction to Nmstate.

Note

If you use the ctlplane interface for provisioning, configure the ramdisk network to use an address range different from the ctlplane address range, to prevent the kernel rp_filter logic from dropping traffic. This ensures that when the ramdisk connects to the provisioning service, the return traffic remains on the machine network interface.

Procedure

  1. Create a Secret CR for each bare-metal data plane node in the node set that defines the pre-provisioning network configuration data for the ramdisk in nmstate format:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <bmh-name>-preprovision-network-data
      namespace: openstack
    type: Opaque
    stringData:
      nmstate: |
        interfaces:
          - name: enp5s0
            type: ethernet
            state: up
            ipv4:
              enabled: true
              address:
              - ip: 192.168.130.100
                prefix-length: 24
        dns-resolver:
          config:
            server:
              - 192.168.122.1
        routes:
          config:
            - destination: 0.0.0.0/0
              next-hop-address: 192.168.130.1
              next-hop-interface: enp5s0
    • Replace <bmh-name> with the name of the BareMetalHost CR the secret is for, for example, edpm-compute-0-preprovision-network-data.

    For more information about the nmstate schema, see https://nmstate.io/devel/yaml_api.html.
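
    If the provisioning network on a leaf is reachable only over a tagged VLAN, the nmstate schema can also define VLAN interfaces. The following is a minimal sketch of an additional entry for the interfaces list, assuming a hypothetical VLAN ID of 100 on enp5s0:

      - name: enp5s0.100
        type: vlan
        state: up
        vlan:
          base-iface: enp5s0
          id: 100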

  2. The Bare Metal Operator (BMO) manages BareMetalHost custom resources (CRs) in the openshift-machine-api namespace by default. Update the Provisioning CR to watch all namespaces:

    $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'
  3. Update the Provisioning CR to enable virtualMediaViaExternalNetwork, which enables bare-metal connectivity through the external network:

    $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"virtualMediaViaExternalNetwork": true }}'
  4. Create a file on your workstation that defines the Secret CR with the credentials for accessing the BMC of each bare-metal data plane node in the node set:

    apiVersion: v1
    kind: Secret
    metadata:
      name: edpm-compute-0-bmc-secret
      namespace: openstack
    type: Opaque
    data:
      username: <base64_username>
      password: <base64_password>
    • Replace <base64_username> and <base64_password> with strings that are base64-encoded. You can use the following command to generate a base64-encoded string:

      $ echo -n <string> | base64
      Tip

      If you don’t want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.

  5. Create a file on your workstation named bmh_leaf1_nodes.yaml that defines the BareMetalHost CR for each bare-metal data plane node, with virtual media as the boot method:

    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: edpm-compute-0
      namespace: openstack
      labels:
        app: openstack
        workload: compute
    spec:
      bmc:
        address: redfish-virtualmedia+http://192.168.111.1:8000/redfish/v1/Systems/e8efd888-f844-4fe0-9e2e-498f4ab7806d 1
        credentialsName: edpm-compute-0-bmc-secret 2
      bootMACAddress: 00:c7:e4:a7:e7:f3
      bootMode: UEFI
      online: false
    1
    The URL for communicating with the node’s Baseboard Management Controller (BMC). For more information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the RHOCP Postinstallation configuration guide. For information on BMC addressing for other boot methods, see BMC addressing in the RHOCP Deploying installer-provisioned clusters on bare metal guide.
    2
    The name of the Secret CR you created in the previous step for accessing the BMC of the node.
  6. Add the preprovisioningNetworkDataName field to each BareMetalHost CR to specify the pre-provisioning network configuration data Secret CR:

    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: edpm-compute-0
      namespace: openstack
      ...
    spec:
      bmc:
        address: redfish-virtualmedia+http://192.168.111.1:8000/redfish/v1/Systems/e8efd888-f844-4fe0-9e2e-498f4ab7806d
      ...
      preprovisioningNetworkDataName: <pre_provision_network_secret>
    • Replace <pre_provision_network_secret> with the name of the Secret CR you created in step 1 for the pre-provisioning network configuration data.
  7. Create the BareMetalHost resources:

    $ oc create -f bmh_leaf1_nodes.yaml
  8. Verify that the BareMetalHost resources have been created and are in the Available state:

    $ oc get bmh
    NAME             STATE       CONSUMER         ONLINE   ERROR   AGE
    edpm-compute-0   Available   openstack-edpm   true             2d21h

6.5. Creating the OpenStackDataPlaneNodeSet CRs for unprovisioned leaf nodes

Create an OpenStackDataPlaneNodeSet custom resource (CR) for each leaf on your data plane that defines the unprovisioned leaf nodes. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR. Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.

You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.

Procedure

  1. Create YAML files on your workstation that define the OpenStackDataPlaneNodeSet CRs for each leaf in the spine-leaf topology:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: data-plane-leaf1 1
      namespace: openstack
    spec:
      tlsEnabled: true
      env: 2
        - name: ANSIBLE_FORCE_COLOR
          value: "True"
    1
    The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set.
    2
    Optional: A list of environment variables to pass to the pod.
  2. Connect the data plane to the control plane network:

    spec:
      ...
      networkAttachments:
        - ctlplane
  3. Specify that the nodes in this set are unprovisioned and must be provisioned when creating the resource:

      preProvisioned: false
  4. Use the baremetalSetTemplate field to describe the configuration of the bare-metal nodes that are provisioned when the data plane is deployed:

      baremetalSetTemplate:
        deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
        bmhNamespace: <bmh_namespace>
        cloudUserName: <ansible_ssh_user>
        bmhLabelSelector:
          app: <bmh_label>
        ctlplaneInterface: <interface>
    • Replace <bmh_namespace> with the namespace defined in the corresponding BareMetalHost CR for the node, for example, openstack.
    • Replace <ansible_ssh_user> with the username of the Ansible SSH user, for example, cloud-admin.
    • Replace <bmh_label> with the label defined in the corresponding BareMetalHost CR for the node, for example, openstack.
    • Replace <interface> with the control plane interface the node connects to, for example, enp6s0.
  5. Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

      nodeTemplate:
        ansibleSSHPrivateKeySecret: <secret-key>
    • Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
  6. Create a Persistent Volume Claim (PVC) in the openstack namespace on your RHOCP cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin, because NFS is incompatible with FIFO files and ansible-runner creates a FIFO file to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
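
    The following is a minimal sketch of such a PVC; the name, requested size, and storage class are placeholders that you adapt to your cluster:

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: ansible-logs-pvc
        namespace: openstack
      spec:
        accessModes:
          - ReadWriteOnce
        volumeMode: Filesystem
        resources:
          requests:
            storage: 10Gi
        storageClassName: <non_nfs_storage_class>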
  7. Enable persistent logging for the data plane nodes:

      nodeTemplate:
        ...
        extraMounts:
          - extraVolType: Logs
            volumes:
            - name: ansible-logs
              persistentVolumeClaim:
                claimName: <pvc_name>
            mounts:
            - name: ansible-logs
              mountPath: "/runner/artifacts"
    • Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
  8. Specify the management network:

      nodeTemplate:
        ...
        managementNetwork: ctlplane
  9. Specify the Secret CRs that Ansible uses to source the usernames and passwords to register the operating system of the nodes to the Red Hat Customer Portal, and enable repositories for your nodes. The following example demonstrates how to register your nodes to CDN. For details on how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

      nodeTemplate:
        ansible:
          ansibleUser: cloud-admin 1
          ansiblePort: 22
          ansibleVarsFrom:
            - secretRef:
                name: subscription-manager
            - secretRef:
                name: redhat-registry
          ansibleVars: 2
            rhc_release: 9.4
            rhc_repositories:
                - {name: "*", state: disabled}
                - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
                - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
            edpm_bootstrap_release_version_package: []
    1
    The user associated with the secret you created in Creating the data plane secrets.
    2
    The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/.

    For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log into registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.

  10. Add the network configuration template to apply to your data plane nodes. The following example applies the single NIC VLANs network configuration to the data plane nodes:

      nodeTemplate:
        ...
        ansible:
          ...
          ansibleVars:
            ...
            neutron_physical_bridge_name: br-ex
            neutron_public_interface_name: eth0
            edpm_network_config_template: |
              ---
              {% set mtu_list = [ctlplane_mtu] %}
              {% for network in nodeset_networks %}
              {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
              {%- endfor %}
              {% set min_viable_mtu = mtu_list | max %}
              network_config:
              - type: ovs_bridge
                name: {{ neutron_physical_bridge_name }}
                mtu: {{ min_viable_mtu }}
                use_dhcp: false
                dns_servers: {{ ctlplane_dns_nameservers }}
                domain: {{ dns_search_domains }}
                addresses:
                - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
                routes: {{ ctlplane_host_routes }}
                members:
                - type: interface
                  name: nic1
                  mtu: {{ min_viable_mtu }}
                  # force the MAC address of the bridge to this interface
                  primary: true
              {% for network in nodeset_networks %}
                - type: vlan
                  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
                  vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
                  addresses:
                  - ip_netmask:
                      {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
                  routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
              {% endfor %}

    For more information about data plane network configuration, see Customizing data plane networks in Configuring network services.

  11. Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR properties.
  12. Define each node in this node set:

      nodes:
        edpm-compute-0: 1
          hostName: edpm-compute-0
          networks: 2
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100 3
          - name: internalapi
            subnetName: subnet1
          - name: storage
            subnetName: subnet1
          - name: tenant
            subnetName: subnet1
          ansible:
            ansibleHost: 192.168.122.100
            ansibleUser: cloud-admin
            ansibleVars: 4
              fqdn_internal_api: edpm-compute-0.example.com
        edpm-compute-1:
          hostName: edpm-compute-1
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.101
          - name: internalapi
            subnetName: subnet1
          - name: storage
            subnetName: subnet1
          - name: tenant
            subnetName: subnet1
          ansible:
            ansibleHost: 192.168.122.101
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-compute-1.example.com
    1
    The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
    2
    Defines the IPAM and the DNS records for the node.
    3
    Specifies a predictable IP address for the network. The IP address must be in the allocation range defined for the network in the NetConfig CR.
    4
    Node-specific Ansible variables that customize the node.
    Note
    • Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
    • You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
    • Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

    For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties.
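
    For a second leaf, you create another OpenStackDataPlaneNodeSet CR whose nodes reference the subnets for that leaf. The following is a minimal sketch of the nodes section for such a node set, assuming a subnet2 defined in your NetConfig CR and a hypothetical 192.168.133.0/24 allocation range:

      nodes:
        edpm-compute-leaf2-0:
          hostName: edpm-compute-leaf2-0
          networks:
          - name: ctlplane
            subnetName: subnet2
            defaultRoute: true
            fixedIP: 192.168.133.100
          - name: internalapi
            subnetName: subnet2
          - name: storage
            subnetName: subnet2
          - name: tenant
            subnetName: subnet2
          ansible:
            ansibleHost: 192.168.133.100
            ansibleUser: cloud-admin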

  13. Save the node set definition file, for example, openstack_unprovisioned_node_set.yaml.
  14. Create the data plane resources:

    $ oc create --save-config -f openstack_unprovisioned_node_set.yaml -n openstack
  15. Verify that the data plane resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states.

  16. Verify that the Secret resource was created for the node set:

    $ oc get secret -n openstack | grep openstack-data-plane
    dataplanenodeset-openstack-data-plane Opaque 1 3m50s
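    Optional: You can also inspect the Ansible inventory that was generated for the node set. The following sketch assumes the inventory is stored under the inventory key of the secret, which is the case in current openstack-operator releases:

    $ oc get secret dataplanenodeset-openstack-data-plane -n openstack \
      -o jsonpath='{.data.inventory}' | base64 -d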
  17. Verify that the nodes have transitioned to the provisioned state:

    $ oc get bmh
    NAME            STATE         CONSUMER               ONLINE   ERROR   AGE
    edpm-compute-0  provisioned   openstack-data-plane   true             3d21h
  18. Verify that the services were created:

    $ oc get openstackdataplaneservice -n openstack
    NAME                    AGE
    bootstrap               8m40s
    ceph-client             8m40s
    ceph-hci-pre            8m40s
    configure-network       8m40s
    configure-os            8m40s
    ...

6.6. OpenStackDataPlaneNodeSet CR spec properties

The following sections detail the OpenStackDataPlaneNodeSet CR spec properties you can configure.

6.6.1. nodeTemplate

Defines the common attributes for the nodes in this OpenStackDataPlaneNodeSet. You can override these common attributes in the definition for each individual node.

Table 6.1. nodeTemplate properties
Field | Description

ansibleSSHPrivateKeySecret

Name of the private SSH key secret that contains the private SSH key for connecting to nodes.

Secret name format: Secret.data.ssh-privatekey

For more information, see Creating an SSH authentication secret.

Default: dataplane-ansible-ssh-private-key-secret

managementNetwork

Name of the network to use for management (SSH/Ansible). Default: ctlplane

networks

Network definitions for the OpenStackDataPlaneNodeSet.

ansible

Ansible configuration options. For more information, see ansible properties.

extraMounts

The files to mount into an Ansible Execution Pod.

userData

UserData configuration for the OpenStackDataPlaneNodeSet.

networkData

NetworkData configuration for the OpenStackDataPlaneNodeSet.

6.6.2. nodes

Defines the node names and node-specific attributes for the nodes in this OpenStackDataPlaneNodeSet. Overrides the common attributes defined in the nodeTemplate.

Table 6.2. nodes properties
Field | Description

ansible

Ansible configuration options. For more information, see ansible properties.

extraMounts

The files to mount into an Ansible Execution Pod.

hostName

The node name.

managementNetwork

Name of the network to use for management (SSH/Ansible).

networkData

NetworkData configuration for the node.

networks

Instance networks.

userData

Node-specific user data.

6.6.3. ansible

Defines the group of Ansible configuration options.

Table 6.3. ansible properties
Field | Description

ansibleUser

The user associated with the secret you created in Creating the data plane secrets. Default: rhel-user

ansibleHost

SSH host for the Ansible connection.

ansiblePort

SSH port for the Ansible connection.

ansibleVars

The Ansible variables that customize the set of nodes. You can use this property to configure any custom Ansible variable, including the Ansible variables available for each edpm-ansible role. For a complete list of Ansible variables by role, see the edpm-ansible documentation.

Note

The ansibleVars parameters that you can configure for an OpenStackDataPlaneNodeSet CR are determined by the services defined for the OpenStackDataPlaneNodeSet. The OpenStackDataPlaneService CRs call the Ansible playbooks from the edpm-ansible playbook collection, which include the roles that are executed as part of the data plane service.

ansibleVarsFrom

A list of sources to populate Ansible variables from. Values defined in ansibleVars with a duplicate key take precedence. For more information, see ansibleVarsFrom properties.

6.6.4. ansibleVarsFrom

Defines the list of sources to populate Ansible variables from.

Table 6.4. ansibleVarsFrom properties
Field | Description

prefix

An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.

configMapRef

The ConfigMap CR to select the ansibleVars from.

secretRef

The Secret CR to select the ansibleVars from.
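
For example, a minimal sketch of an ansibleVarsFrom list that combines a prefixed ConfigMap source with a Secret source; the ConfigMap name common-edpm-vars is hypothetical:

    nodeTemplate:
      ansible:
        ansibleVarsFrom:
          - prefix: edpm_
            configMapRef:
              name: common-edpm-vars
          - secretRef:
              name: subscription-manager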

6.7. Deploying the data plane

You use the OpenStackDataPlaneDeployment custom resource definition (CRD) to configure the services on the data plane nodes and deploy the data plane. You control the execution of Ansible on the data plane by creating OpenStackDataPlaneDeployment custom resources (CRs). Each OpenStackDataPlaneDeployment CR models a single Ansible execution. Create an OpenStackDataPlaneDeployment CR to deploy each of your OpenStackDataPlaneNodeSet CRs.

Note

When the OpenStackDataPlaneDeployment successfully completes execution, it does not automatically execute Ansible again, even if the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources are changed. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR. Remove any failed OpenStackDataPlaneDeployment CRs in your environment before creating a new one, so that the new OpenStackDataPlaneDeployment can run Ansible with an updated Secret.
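
For example, to remove a failed deployment before you create a new one:

    $ oc delete openstackdataplanedeployment <failed_deployment_name> -n openstack

Replace <failed_deployment_name> with the name of the failed OpenStackDataPlaneDeployment CR.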

Procedure

  1. Create a file on your workstation named openstack_data_plane_deploy.yaml to define the OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: data-plane-deploy
      namespace: openstack
    • metadata.name: The OpenStackDataPlaneDeployment CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the node sets in the deployment.
  2. Add all the OpenStackDataPlaneNodeSet CRs that you want to deploy:

    spec:
      nodeSets:
        - openstack-data-plane
        - <nodeSet_name>
        - ...
        - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
  3. Save the openstack_data_plane_deploy.yaml deployment file.
  4. Deploy the data plane:

    $ oc create -f openstack_data_plane_deploy.yaml -n openstack

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10

    If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
  5. Verify that the data plane is deployed:

    $ oc wait openstackdataplanedeployment data-plane-deploy --for=condition=Ready --timeout=<timeout_value>
    $ oc wait openstackdataplanenodeset openstack-data-plane --for=condition=Ready --timeout=<timeout_value>
    • Replace <timeout_value> with the time, in minutes, that you want the command to wait for the task to complete. For example, if you want the command to wait 60 minutes, use the value 60m. If the Ready condition is not met within this time frame, the command returns a timeout error. Use a value that is appropriate to the size of your deployment; give larger deployments more time to complete deployment tasks.

      For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  6. Map the Compute nodes to the Compute cell that they are connected to:

    $ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose

    If you did not create additional cells, this command maps the Compute nodes to cell1.
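
    To confirm the mapping, you can list the Compute hosts that are registered in the cell database; list_hosts is part of the same nova-manage cell_v2 tooling as discover_hosts:

    $ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 list_hosts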

  7. Access the remote shell for the openstackclient pod and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    $ openstack hypervisor list

    If some Compute nodes are missing from the hypervisor list, retry the previous step. If the Compute nodes are still missing from the list, check the status and health of the nova-compute services on the deployed data plane nodes.

  8. Verify that the hypervisor hostname is a fully qualified domain name (FQDN):

    $ hostname -f

    If the hypervisor hostname is not an FQDN, for example, if it was registered as a short name or full name instead, contact Red Hat Support.

6.8. Data plane conditions and states

Each data plane resource has a series of conditions within its status subresource that indicates the overall state of the resource, including its deployment progress.

For an OpenStackDataPlaneNodeSet, until an OpenStackDataPlaneDeployment has been started and finished successfully, the Ready condition is False. When the deployment succeeds, the Ready condition is set to True. A subsequent deployment sets the Ready condition back to False until that deployment succeeds, at which point it is set back to True.

Table 6.5. OpenStackDataPlaneNodeSet CR conditions
Condition | Description

Ready

  • "True": The OpenStackDataPlaneNodeSet CR is successfully deployed.
  • "False": The deployment is not yet requested or has failed, or there are other failed conditions.

SetupReady

"True": All setup tasks for a resource are complete. Setup tasks include verifying the SSH key secret, verifying other fields on the resource, and creating the Ansible inventory for each resource. Each service-specific condition is set to "True" when that service completes deployment. You can check the service conditions to see which services have completed their deployment, or which services failed.

DeploymentReady

"True": The NodeSet has been successfully deployed.

InputReady

"True": The required inputs are available and ready.

NodeSetDNSDataReady

"True": DNSData resources are ready.

NodeSetIPReservationReady

"True": The IPSet resources are ready.

NodeSetBaremetalProvisionReady

"True": Bare-metal nodes are provisioned and ready.

Table 6.6. OpenStackDataPlaneNodeSet status fields
Status field | Description

Deployed

  • "True": The OpenStackDataPlaneNodeSet CR is successfully deployed.
  • "False": The deployment is not yet requested or has failed, or there are other failed conditions.

DNSClusterAddresses

The list of DNS server addresses for the cluster DNS service.

CtlplaneSearchDomain

The DNS search domain for the ctlplane network.
Table 6.7. OpenStackDataPlaneDeployment CR conditions
Condition | Description

Ready

  • "True": The data plane is successfully deployed.
  • "False": The data plane deployment failed, or there are other failed conditions.

DeploymentReady

"True": The data plane is successfully deployed.

InputReady

"True": The required inputs are available and ready.

<NodeSet> Deployment Ready

"True": The deployment has succeeded for the named NodeSet, indicating all services for the NodeSet have succeeded.

<NodeSet> <Service> Deployment Ready

"True": The deployment has succeeded for the named NodeSet and Service. Each <NodeSet> <Service> Deployment Ready specific condition is set to "True" as that service completes successfully for the named NodeSet. Once all services are complete for a NodeSet, the <NodeSet> Deployment Ready condition is set to "True". The service conditions indicate which services have completed their deployment, or which services failed and for which NodeSets.

Table 6.8. OpenStackDataPlaneDeployment status fields
Status field | Description

Deployed

  • "True": The data plane is successfully deployed. All Services for all NodeSets have succeeded.
  • "False": The deployment is not yet requested or has failed, or there are other failed conditions.
Table 6.9. OpenStackDataPlaneService CR conditions
Condition | Description

Ready

"True": The service has been created and is ready for use. "False": The service has failed to be created.

6.9. Troubleshooting data plane creation and deployment

To troubleshoot a deployment when services are not deploying or operating correctly, you can check the job condition message for the service, and you can check the logs for a node set.

6.9.1. Checking the job condition message for a service

Each data plane deployment in the environment has associated services. Each of these services has a job condition message that matches the current status of the AnsibleEE job executing for that service. You can use this information to troubleshoot deployments when services are not deploying or operating correctly.

Procedure

  1. Determine the name and status of all deployments:

    $ oc get openstackdataplanedeployment

    The following example output shows a deployment currently in progress:

    $ oc get openstackdataplanedeployment

    NAME           NODESETS                  STATUS   MESSAGE
    edpm-compute   ["openstack-edpm-ipam"]   False    Deployment in progress
  2. Retrieve and inspect Ansible execution jobs.

    The Kubernetes jobs are labeled with the name of the OpenStackDataPlaneDeployment. You can list the jobs for each OpenStackDataPlaneDeployment by using the label:

     $ oc get job -l openstackdataplanedeployment=edpm-compute
     NAME                                                 STATUS     COMPLETIONS   DURATION   AGE
     bootstrap-edpm-compute-openstack-edpm-ipam           Complete   1/1           78s        25h
     configure-network-edpm-compute-openstack-edpm-ipam   Complete   1/1           37s        25h
     configure-os-edpm-compute-openstack-edpm-ipam        Complete   1/1           66s        25h
     download-cache-edpm-compute-openstack-edpm-ipam      Complete   1/1           64s        25h
     install-certs-edpm-compute-openstack-edpm-ipam       Complete   1/1           46s        25h
     install-os-edpm-compute-openstack-edpm-ipam          Complete   1/1           57s        25h
     libvirt-edpm-compute-openstack-edpm-ipam             Complete   1/1           2m37s      25h
     neutron-metadata-edpm-compute-openstack-edpm-ipam    Complete   1/1           61s        25h
     nova-edpm-compute-openstack-edpm-ipam                Complete   1/1           3m20s      25h
     ovn-edpm-compute-openstack-edpm-ipam                 Complete   1/1           78s        25h
     run-os-edpm-compute-openstack-edpm-ipam              Complete   1/1           33s        25h
     ssh-known-hosts-edpm-compute                         Complete   1/1           19s        25h
     telemetry-edpm-compute-openstack-edpm-ipam           Complete   1/1           2m5s       25h
     validate-network-edpm-compute-openstack-edpm-ipam    Complete   1/1           16s        25h

    You can check logs by using oc logs -f job/<job-name>. For example, to check the logs from the configure-network job:

     $ oc logs -f jobs/configure-network-edpm-compute-openstack-edpm-ipam | tail -n2
     PLAY RECAP *********************************************************************
     edpm-compute-0             : ok=22   changed=0    unreachable=0    failed=0    skipped=17   rescued=0    ignored=0

6.9.1.1. Job condition messages

AnsibleEE jobs have an associated condition message that indicates the current state of the service job. This condition message is displayed in the MESSAGE field of the oc get job <job_name> command output. Jobs return one of the following conditions when queried:

  • Job not started: The job has not started.
  • Job not found: The job could not be found.
  • Job is running: The job is currently running.
  • Job complete: The job execution is complete.
  • Job error occurred <error_message>: The job stopped executing unexpectedly. The <error_message> is replaced with a specific error message.

To further investigate a service that is displaying a particular job condition message, view its logs by using the command oc logs job/<service>. For example, to view the logs for the repo-setup-openstack-edpm service, use the command oc logs job/repo-setup-openstack-edpm.

6.9.2. Checking the logs for a node set

You can access the logs for a node set to check for deployment issues.

Procedure

  1. Retrieve pods with the OpenStackAnsibleEE label:

    $ oc get pods -l app=openstackansibleee
    configure-network-edpm-compute-j6r4l   0/1     Completed           0          3m36s
    validate-network-edpm-compute-6g7n9    0/1     Pending             0          0s
    validate-network-edpm-compute-6g7n9    0/1     ContainerCreating   0          11s
    validate-network-edpm-compute-6g7n9    1/1     Running             0          13s
  2. Open a remote shell or debug session on the pod that you want to check:

    1. Pod that is running:

      $ oc rsh validate-network-edpm-compute-6g7n9
    2. Pod that is not running:

      $ oc debug configure-network-edpm-compute-j6r4l
  3. List the directories in the /runner/artifacts mount:

    $ ls /runner/artifacts
    configure-network-edpm-compute
    validate-network-edpm-compute
  4. View the stdout for the required artifact:

    $ cat /runner/artifacts/configure-network-edpm-compute/stdout
