
Chapter 5. Configuring Networker nodes


In a Red Hat OpenStack Services on OpenShift (RHOSO) environment, you can add Networker nodes to the RHOSO data plane.

Networker nodes can serve as gateways to external networks.

With or without gateways, Networker nodes can serve other purposes as well. For example, Networker nodes are required when you deploy the neutron-dhcp-agent in a RHOSO environment that has a routed spine-leaf network topology with DHCP relays running on leaf nodes. Networker nodes can also provide metadata for SR-IOV ports.

If your NICs support DPDK, you can enable DPDK on the Networker node interfaces to accelerate gateway traffic processing.

Networker nodes are similar to other RHOSO data plane nodes such as Compute nodes. Like Compute nodes, Networker nodes use the RHEL 9.4 operating system. Networker nodes and Compute nodes share some common services and configuration features, and each has a set of role-specific services and configurations. For example, unlike Compute nodes, Networker nodes do not require the Nova or libvirt services.

A data plane typically consists of multiple OpenStackDataPlaneNodeSet custom resources (CRs) to define sets of nodes with different configurations and roles. For example, one node set might define your data plane Networker nodes. Others might define functionally related sets of Compute nodes.

You can use pre-provisioned or unprovisioned nodes in an OpenStackDataPlaneNodeSet CR:

  • Pre-provisioned node: You have used your own tooling to install the operating system on the node before adding it to the data plane.
  • Unprovisioned node: The node does not have an operating system installed before you add it to the data plane. The node is provisioned by using the Cluster Baremetal Operator (CBO) as part of the data plane creation and deployment process.
Note

You cannot include both pre-provisioned and unprovisioned nodes in the same OpenStackDataPlaneNodeSet CR.

To create and deploy a data plane with or without Networker nodes, you must perform the following tasks:

  1. Create a Secret CR for each node set for Ansible to use to execute commands on the data plane nodes (Networker nodes and Compute nodes).
  2. Create the OpenStackDataPlaneNodeSet CRs that define the nodes and layout of the data plane.

    One of the following procedures describes how to create Networker node sets with pre-provisioned nodes. The other describes how to create Networker node sets with unprovisioned bare-metal nodes that must be provisioned during the node set deployment.

  3. Create the OpenStackDataPlaneDeployment CR that triggers the Ansible execution that deploys and configures the software for the specified list of OpenStackDataPlaneNodeSet CRs.

5.1. Prerequisites

  • A functional control plane, created with the OpenStack Operator.
  • You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.

5.2. Creating the data plane secrets

You must create the Secret custom resources (CRs) that the data plane requires to be able to operate. The Secret CRs are used by the data plane nodes to secure access between nodes, to register the node operating systems with the Red Hat Customer Portal, to enable node repositories, and to provide Compute nodes with access to libvirt.

To enable secure access between nodes, you must generate two SSH keys and create an SSH key Secret CR for each key:

  • An SSH key to enable Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key. You can create an SSH key for each OpenStackDataPlaneNodeSet CR in your data plane.

  • An SSH key to enable migration of instances between Compute nodes.

Prerequisites

  • Pre-provisioned nodes are configured with an SSH public key in the $HOME/.ssh/authorized_keys file for a user with passwordless sudo privileges. For more information, see Managing sudo access in the RHEL Configuring basic system settings guide.
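
    For example, the following commands are a minimal sketch of one way to prepare a pre-provisioned node, assuming a cloud-admin user and the Ansible SSH key that you generate in the following procedure; adjust the user name and key path to match your environment:

    # On each pre-provisioned node, as root:
    $ useradd -m cloud-admin
    $ echo "cloud-admin ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/cloud-admin
    $ chmod 0440 /etc/sudoers.d/cloud-admin
    # From your workstation, copy the Ansible SSH public key to the node:
    $ ssh-copy-id -i <key_file_name>.pub cloud-admin@<node_ip>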

Procedure

  1. For unprovisioned nodes, create the SSH key pair for Ansible:

    $ ssh-keygen -f <key_file_name> -N "" -t rsa -b 4096
    • Replace <key_file_name> with the name to use for the key pair.
  2. Create the Secret CR for Ansible and apply it to the cluster:

    $ oc create secret generic dataplane-ansible-ssh-private-key-secret \
    --save-config \
    --dry-run=client \
    --from-file=ssh-privatekey=<key_file_name> \
    --from-file=ssh-publickey=<key_file_name>.pub \
    [--from-file=authorized_keys=<key_file_name>.pub] -n openstack \
    -o yaml | oc apply -f -
    • Replace <key_file_name> with the name and location of your SSH key pair file.
    • Optional: Only include the --from-file=authorized_keys option for bare-metal nodes that must be provisioned when creating the data plane.
  3. If you are creating Compute nodes, create a secret for migration.

    1. Create the SSH key pair for instance migration:

      $ ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''
    2. Create the Secret CR for migration and apply it to the cluster:

      $ oc create secret generic nova-migration-ssh-key \
      --save-config \
      --from-file=ssh-privatekey=nova-migration-ssh-key \
      --from-file=ssh-publickey=nova-migration-ssh-key.pub \
      -n openstack \
      -o yaml | oc apply -f -
  4. For nodes that have not been registered to the Red Hat Customer Portal, create the Secret CR for subscription-manager credentials to register the nodes:

    $ oc create secret generic subscription-manager \
    --from-literal rhc_auth='{"login": {"username": "<subscription_manager_username>", "password": "<subscription_manager_password>"}}'
    • Replace <subscription_manager_username> with the username you set for subscription-manager.
    • Replace <subscription_manager_password> with the password you set for subscription-manager.
  5. Create a Secret CR that contains the Red Hat registry credentials:

    $ oc create secret generic redhat-registry --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'
    • Replace <username> and <password> with your Red Hat registry username and password credentials.

      For information about how to create your registry service account, see the Knowledge Base article Creating Registry Service Accounts.

  6. If you are creating Compute nodes, create a secret for libvirt.

    1. Create a file on your workstation named secret_libvirt.yaml to define the libvirt secret:

      apiVersion: v1
      kind: Secret
      metadata:
       name: libvirt-secret
       namespace: openstack
      type: Opaque
      data:
       LibvirtPassword: <base64_password>
      • Replace <base64_password> with a base64-encoded string with maximum length 63 characters. You can use the following command to generate a base64-encoded password:

        $ echo -n <password> | base64
        Tip

        If you do not want to base64-encode the password, you can use the stringData field instead of the data field to set the password.
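
        For example, a minimal sketch of the same Secret CR defined with stringData and a plain-text password:

        apiVersion: v1
        kind: Secret
        metadata:
          name: libvirt-secret
          namespace: openstack
        type: Opaque
        stringData:
          LibvirtPassword: <password>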

    2. Create the Secret CR:

      $ oc apply -f secret_libvirt.yaml -n openstack
  7. Verify that the Secret CRs are created:

    $ oc describe secret dataplane-ansible-ssh-private-key-secret
    $ oc describe secret nova-migration-ssh-key
    $ oc describe secret subscription-manager
    $ oc describe secret redhat-registry
    $ oc describe secret libvirt-secret

You can define an OpenStackDataPlaneNodeSet CR for each logical grouping of pre-provisioned Networker nodes in your data plane. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.

You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.
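
For example, the following minimal sketch (the variable values are illustrative) sets dns_search_domains for all nodes in the nodeTemplate section, and overrides it with a node-specific value in the nodes section:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: networker-nodes
  namespace: openstack
spec:
  nodeTemplate:
    ansible:
      ansibleUser: cloud-admin
      ansibleVars:
        dns_search_domains: []
  nodes:
    edpm-networker-0:
      hostName: edpm-networker-0
    edpm-networker-1:
      hostName: edpm-networker-1
      ansible:
        ansibleVars:
          dns_search_domains:
            - example.com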

Tip

For an example OpenStackDataPlaneNodeSet CR that configures a set of Networker nodes without OVS-DPDK from pre-provisioned Networker nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned Networker nodes.

For an example OpenStackDataPlaneNodeSet CR that configures a set of Networker nodes with OVS-DPDK from pre-provisioned Networker nodes, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned Networker nodes with DPDK.

Procedure

  1. Create a file on your workstation named openstack_preprovisioned_networker_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: networker-nodes 1
      namespace: openstack
    spec:
      env: 2
        - name: ANSIBLE_FORCE_COLOR
          value: "True"

    1 The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. If necessary, replace the example name networker-nodes with a name that more accurately describes your node set.
    2 Optional: A list of environment variables to pass to the pod.
  2. Include the services field to override the default services. Remove the nova, libvirt, and other services that are not required by a Networker node:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: networker-nodes
      namespace: openstack
    spec:
    ...
      services:
       - redhat
       - bootstrap
       - download-cache
       - reboot-os
       - configure-ovs-dpdk 1
       - configure-network
       - validate-network
       - install-os
       - configure-os
       - ssh-known-hosts
       - run-os
       - install-certs
       - ovn
       - neutron-metadata 2
       - neutron-dhcp 3

    1 The configure-ovs-dpdk service is required only when DPDK NICs are used in the deployment.
    2 The neutron-metadata service is required only when SR-IOV ports are used in the deployment.
    3 You can optionally run the neutron-dhcp service on your Networker nodes. You might need to use neutron-dhcp with OVN if your deployment uses DHCP relays, or advanced DHCP options that are supported by dnsmasq but not by the OVN DHCP implementation.
  3. Connect the data plane to the control plane network:

    spec:
      ...
      networkAttachments:
        - ctlplane
  4. Enable the chassis as a gateway:

    spec:
    ...
      nodeTemplate:
        ansible:
          ansibleVars:
            ...
            edpm_enable_chassis_gw: true
  5. Specify that the nodes in this set are pre-provisioned:

    spec:
    ...
      preProvisioned: true
      nodeTemplate:
        ansible:
          ansibleVars:
            ...
            edpm_enable_chassis_gw: true
  6. Add the SSH key secret that you created so that Ansible can connect to the data plane nodes:

      nodeTemplate:
        ansibleSSHPrivateKeySecret: <secret-key>
    • Replace <secret-key> with the name of the SSH key Secret CR you created for this node set in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
  7. Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO and the ansible-runner creates a FIFO file to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
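
    For example, a minimal PVC manifest that satisfies these requirements; the storage class and requested size are illustrative and depend on your cluster:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: <pvc_name>
      namespace: openstack
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Filesystem
      storageClassName: <storage_class>
      resources:
        requests:
          storage: 10Gi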
  8. Enable persistent logging for the Networker nodes:

      nodeTemplate:
        ...
        extraMounts:
          - extraVolType: Logs
            volumes:
            - name: ansible-logs
              persistentVolumeClaim:
                claimName: <pvc_name>
            mounts:
            - name: ansible-logs
              mountPath: "/runner/artifacts"
    • Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
  9. Specify the management network:

      nodeTemplate:
        ...
        managementNetwork: ctlplane
  10. Specify the Secret CRs used to source the usernames and passwords to register the operating system of the nodes that are not registered to the Red Hat Customer Portal, and enable repositories for your nodes. The following example demonstrates how to register your nodes to Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

      nodeTemplate:
        ...
        ansible:
          ansibleUser: cloud-admin 1
          ansiblePort: 22
          ansibleVarsFrom:
            - secretRef:
                name: subscription-manager
            - secretRef:
                name: redhat-registry
          ansibleVars: 2
            rhc_release: 9.4
            rhc_repositories:
                - {name: "*", state: disabled}
                - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
                - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
            edpm_bootstrap_release_version_package: []

    1 The user associated with the secret you created in Creating the data plane secrets.
    2 The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/.

    For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log into registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.

  11. Add the network configuration template to apply to your Networker nodes.

      nodeTemplate:
        ...
        ansible:
          ...
          ansibleVars:
            ...
            neutron_physical_bridge_name: br-ex
            neutron_public_interface_name: eth0
            edpm_network_config_nmstate: true 1
            edpm_network_config_update: false 2

    1 Sets the os-net-config provider to nmstate. The default value is true. Change it to false only if a specific limitation of the nmstate provider requires you to use the ifcfg provider. For more information on advantages and limitations of the nmstate provider, see https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/planning_your_deployment/plan-networks_planning#plan-os-net-config_plan-network in Planning your deployment.
    2 When deploying a node set for the first time, set the edpm_network_config_update variable to false. When updating or adopting a node set, set edpm_network_config_update to true.
    Important

    After an update or an adoption, you must reset edpm_network_config_update to false. Otherwise, the nodes could lose network access. Whenever edpm_network_config_update is true, the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service in its servicesOverride list.
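
    For example, a minimal sketch of an OpenStackDataPlaneDeployment CR that includes the configure-network service in its servicesOverride list; the CR and node set names are illustrative:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: networker-network-update
      namespace: openstack
    spec:
      nodeSets:
        - networker-nodes
      servicesOverride:
        - configure-network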

    The following example applies a VLAN network configuration to a set of data plane Networker nodes with DPDK:

            edpm_network_config_template: |
              ...
              {% set mtu_list = [ctlplane_mtu] %}
              {% for network in nodeset_networks %}
              {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
              {%- endfor %}
              {% set min_viable_mtu = mtu_list | max %}
              network_config:
              - type: ovs_user_bridge
                name: {{ neutron_physical_bridge_name }}
                mtu: {{ min_viable_mtu }}
                use_dhcp: false
                dns_servers: {{ ctlplane_dns_nameservers }}
                domain: {{ dns_search_domains }}
                addresses:
                - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
                routes: {{ ctlplane_host_routes }}
                members:
                - type: ovs_dpdk_port
                  name: dpdk0
                  members:
                  - type: interface
                    name: nic1
    
    
              - type: linux_bond
                name: bond_api
                use_dhcp: false
                bonding_options: "mode=active-backup"
                dns_servers: {{ ctlplane_dns_nameservers }}
                members:
                - type: interface
                  name: nic2
                  primary: true
    
    
              - type: vlan
                vlan_id: {{ lookup('vars', networks_lower['internalapi'] ~ '_vlan_id') }}
                device: bond_api
                addresses:
                - ip_netmask: {{ lookup('vars', networks_lower['internalapi'] ~ '_ip') }}/{{ lookup('vars', networks_lower['internalapi'] ~ '_cidr') }}
    
    
              - type: ovs_user_bridge
                name: br-link0
                use_dhcp: false
                ovs_extra: "set port br-link0 tag={{ lookup('vars', networks_lower['tenant'] ~ '_vlan_id') }}"
                addresses:
                - ip_netmask: {{ lookup('vars', networks_lower['tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['tenant'] ~ '_cidr')}}
                members:
                - type: ovs_dpdk_bond
                  name: dpdkbond0
                  mtu: 9000
                  rx_queue: 1
                  ovs_extra: "set port dpdkbond0 bond_mode=balance-slb"
                  members:
                  - type: ovs_dpdk_port
                    name: dpdk1
                    members:
                    - type: interface
                      name: nic3
                  - type: ovs_dpdk_port
                    name: dpdk2
                    members:
                    - type: interface
                      name: nic4
    
    
              - type: ovs_user_bridge
                name: br-link1
                use_dhcp: false
                members:
                - type: ovs_dpdk_bond
                  name: dpdkbond1
                  mtu: 9000
                  rx_queue: 1
                  ovs_extra: "set port dpdkbond1 bond_mode=balance-slb"
                  members:
                  - type: ovs_dpdk_port
                    name: dpdk3
                    members:
                    - type: interface
                      name: nic5
                  - type: ovs_dpdk_port
                    name: dpdk4
                    members:
                    - type: interface
                      name: nic6
            neutron_physical_bridge_name: br-ex

    The following example applies a VLAN network configuration to a set of data plane Networker nodes without DPDK:

            edpm_network_config_template: |
              ---
              {% set mtu_list = [ctlplane_mtu] %}
              {% for network in nodeset_networks %}
              {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
              {%- endfor %}
              {% set min_viable_mtu = mtu_list | max %}
              network_config:
                - type: ovs_bridge
                  name: {{ neutron_physical_bridge_name }}
                  mtu: {{ min_viable_mtu }}
                  use_dhcp: false
                  dns_servers: {{ ctlplane_dns_nameservers }}
                  domain: {{ dns_search_domains }}
                  addresses:
                    - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
                  routes: {{ ctlplane_host_routes }}
                  members:
                    - type: interface
                      name: nic2
                      mtu: {{ min_viable_mtu }}
                      # force the MAC address of the bridge to this interface
                      primary: true
              {% for network in nodeset_networks %}
                    - type: vlan
                      mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
                      vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
                      addresses:
                        - ip_netmask: >-
                            {{
                              lookup('vars', networks_lower[network] ~ '_ip')
                            }}/{{
                              lookup('vars', networks_lower[network] ~ '_cidr')
                            }}
                      routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
              {% endfor %}

    For more information about data plane network configuration, see Customizing data plane networks in Configuring network services.

  12. Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.
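
    For example, a minimal sketch of common Ansible variables that you might set for every node in the set; the values shown are illustrative and appear in the examples later in this chapter:

      nodeTemplate:
        ...
        ansible:
          ansibleVars:
            ...
            dns_search_domains: []
            edpm_sshd_configure_firewall: true
            edpm_sshd_allowed_ranges:
              - 192.168.122.0/24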
  13. Define each node in this node set:

    ...
      nodes:
        edpm-networker-0: 1
          hostName: edpm-networker-0
          networks: 2
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100 3
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.100
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.100
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.100
          ansible:
            ansibleHost: 192.168.122.100
            ansibleUser: cloud-admin
            ansibleVars: 4
              fqdn_internal_api: edpm-networker-0.example.com
        edpm-networker-1:
          hostName: edpm-networker-1
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.101
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.101
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.101
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.101
          ansible:
            ansibleHost: 192.168.122.101
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-networker-1.example.com

    1 The node definition reference, for example, edpm-networker-0. Each node in the node set must have a node definition.
    2 Defines the IPAM and the DNS records for the node.
    3 Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR.
    4 Node-specific Ansible variables that customize the node.
    Note
    • Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
    • You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
    • Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

    For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.

  14. Save the openstack_preprovisioned_networker_node_set.yaml definition file.
  15. Create the data plane resources:

    $ oc create --save-config -f openstack_preprovisioned_networker_node_set.yaml -n openstack
  16. Verify that the data plane resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset networker-nodes --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message. Otherwise, the command returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  17. Verify that the Secret resource was created for the node set:

    $ oc get secret | grep networker-nodes
    dataplanenodeset-networker-nodes Opaque 1 3m50s
  18. Verify the services were created:

    $ oc get openstackdataplaneservice -n openstack
    NAME                AGE
    bootstrap           46m
    ceph-client         46m
    ceph-hci-pre        46m
    configure-network   46m
    configure-os        46m
    ...

The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Networker nodes with some node-specific configuration. Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that describes the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-networker-nodes
  namespace: openstack
spec:
  services:
      - bootstrap
      - download-cache
      - reboot-os
      - configure-network
      - validate-network
      - install-os
      - configure-os
      - ssh-known-hosts
      - run-os
      - install-certs
      - ovn

  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
  networkAttachments:
    - ctlplane
  preProvisioned: true
  nodeTemplate:
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
    extraMounts:
      - extraVolType: Logs
        volumes:
        - name: ansible-logs
          persistentVolumeClaim:
            claimName: <pvc_name>
        mounts:
        - name: ansible-logs
          mountPath: "/runner/artifacts"
    managementNetwork: ctlplane
    ansible:
      ansibleUser: cloud-admin
      ansiblePort: 22
      ansibleVarsFrom:
        - secretRef:
            name: subscription-manager
        - secretRef:
            name: redhat-registry
      ansibleVars:
        rhc_release: 9.4
        rhc_repositories:
          - {name: "*", state: disabled}
          - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
          - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
          - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
          - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
          - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
          - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
        edpm_bootstrap_release_version_package: []
        ...
        neutron_physical_bridge_name: br-ex
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: interface
              name: nic1
              mtu: {{ min_viable_mtu }}
              # force the MAC address of the bridge to this interface
              primary: true
          {% for network in nodeset_networks %}
            - type: vlan
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              addresses:
              - ip_netmask:
                  {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
          {% endfor %}
  nodes:
    edpm-networker-0:
      hostName: edpm-networker-0
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.100
      - name: internalapi
        subnetName: subnet1
        fixedIP: 172.17.0.100
      - name: storage
        subnetName: subnet1
        fixedIP: 172.18.0.100
      - name: tenant
        subnetName: subnet1
        fixedIP: 172.19.0.100
      ansible:
        ansibleHost: 192.168.122.100
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-networker-0.example.com
    edpm-networker-1:
      hostName: edpm-networker-1
      networks:
      - name: ctlplane
        subnetName: subnet1
        defaultRoute: true
        fixedIP: 192.168.122.101
      - name: internalapi
        subnetName: subnet1
        fixedIP: 172.17.0.101
      - name: storage
        subnetName: subnet1
        fixedIP: 172.18.0.101
      - name: tenant
        subnetName: subnet1
        fixedIP: 172.19.0.101
      ansible:
        ansibleHost: 192.168.122.101
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-networker-1.example.com

The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Networker nodes with OVS-DPDK and some node-specific configuration. Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that describes the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.

apiVersion: v1
kind: ConfigMap
metadata:
  name: networker-nodeset-values
  annotations:
    config.kubernetes.io/local-config: "true"
data:
  root_password: cmVkaGF0Cg==
  preProvisioned: false
  baremetalSetTemplate:
    ctlplaneInterface: <control plane interface>
    cloudUserName: cloud-admin
    provisioningInterface: <provisioning network interface>
    bmhLabelSelector:
      app: openstack-networker
    passwordSecret:
      name: baremetalset-password-secret
      namespace: openstack
  ssh_keys:
    # Authorized keys that will have access to the dataplane networkers via SSH
    authorized: <authorized key>
    # The private key that will have access to the dataplane networkers via SSH
    private: <private key>
    # The public key that will have access to the dataplane networkers via SSH
    public: <public key>
  nodeset:
    ansible:
      ansibleUser: cloud-admin
      ansiblePort: 22
      ansibleVarsFrom:
        - secretRef:
            name: subscription-manager
        - secretRef:
            name: redhat-registry
      ansibleVars:
        edpm_enable_chassis_gw: true
        rhc_release: 9.4
        rhc_repositories:
          - {name: "*", state: disabled}
          - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
          - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
          - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
          - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
          - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
          - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
        edpm_bootstrap_release_version_package: []
        ...
        edpm_network_config_template: |
          ...
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: interface
            name: nic1
            use_dhcp: false


          - type: interface
            name: nic2
            use_dhcp: false


          - type: ovs_user_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: ovs_dpdk_port
              rx_queue: 1
              name: dpdk0
              members:
              - type: interface
                name: nic3
        # These vars are for the network config templates themselves and are
        # considered EDPM network defaults.
        neutron_physical_bridge_name: br-ex
        neutron_public_interface_name: nic1
        # edpm_nodes_validation
        edpm_nodes_validation_validate_controllers_icmp: false
        edpm_nodes_validation_validate_gateway_icmp: false
        dns_search_domains: []
        gather_facts: false
        # edpm firewall, change the allowed CIDR if needed
        edpm_sshd_configure_firewall: true
        edpm_sshd_allowed_ranges:
          - 192.168.122.0/24
    networks:
      - defaultRoute: true
        name: ctlplane
        subnetName: subnet1
      - name: internalapi
        subnetName: subnet1
      - name: storage
        subnetName: subnet1
      - name: tenant
        subnetName: subnet1
    nodes:
      edpm-networker-0:
        hostName: edpm-networker-0
    services:
      - bootstrap
      - download-cache
      - reboot-os
      - configure-ovs-dpdk
      - configure-network
      - validate-network
      - install-os
      - configure-os
      - ssh-known-hosts
      - run-os
      - install-certs
      - ovn
      - neutron-metadata

To create Networker nodes from unprovisioned bare-metal nodes, you must perform the following tasks:

  1. Create a BareMetalHost custom resource (CR) for each bare-metal Networker node.
  2. Define an OpenStackDataPlaneNodeSet CR for the Networker nodes.

Prerequisites

You must create a BareMetalHost custom resource (CR) for each bare-metal Networker node. At a minimum, you must provide the data required to add the bare-metal Networker node on the network so that the remaining installation steps can access the node and perform the configuration.

Note

If you use the ctlplane interface for provisioning, to prevent the kernel rp_filter logic from dropping traffic, configure the DHCP service to use an address range different from the ctlplane address range. This ensures that the return traffic remains on the machine network interface.

Procedure

  1. The Bare Metal Operator (BMO) manages BareMetalHost custom resources (CRs) in the openshift-machine-api namespace by default. Update the Provisioning CR to watch all namespaces:

    $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'
  2. If you are using virtual media boot for bare-metal Networker nodes and the nodes are not connected to a provisioning network, you must update the Provisioning CR to enable virtualMediaViaExternalNetwork, which enables bare-metal connectivity through the external network:

    $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"virtualMediaViaExternalNetwork": true }}'
  3. Create a file on your workstation that defines the Secret CR with the credentials for accessing the Baseboard Management Controller (BMC) of each bare-metal Networker node in the node set:

    apiVersion: v1
    kind: Secret
    metadata:
      name: edpm-networker-0-bmc-secret
      namespace: openstack
    type: Opaque
    data:
      username: <base64_username>
      password: <base64_password>
    • Replace <base64_username> and <base64_password> with strings that are base64-encoded. You can use the following command to generate a base64-encoded string:

      $ echo -n <string> | base64
      Tip

      If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password.

  4. Create a file named bmh_networker_nodes.yaml on your workstation that defines the BareMetalHost CR for each bare-metal Networker node. The following example creates a BareMetalHost CR with the provisioning method Redfish virtual media:

    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: edpm-networker-0
      namespace: openstack
      labels: 1
        app: openstack-networker
        workload: networker
    spec:
    ...
      bmc:
        address: redfish-virtualmedia+http://192.168.111.1:8000/redfish/v1/Systems/e8efd888-f844-4fe0-9e2e-498f4ab7806d 2
        credentialsName: edpm-networker-0-bmc-secret 3
      bootMACAddress: 00:c7:e4:a7:e7:f3
      bootMode: UEFI
      online: false
      [preprovisioningNetworkDataName: <network_config_secret_name>] 4

    1 Metadata labels, such as app, workload, and nodeName are key-value pairs that provide varying levels of granularity for labelling nodes. You can use these labels when you create an OpenStackDataPlaneNodeSet CR to describe the configuration of bare-metal nodes to be provisioned or to define nodes in a node set.
    2 The URL for communicating with the node’s BMC controller. For information about BMC addressing for other provisioning methods, see BMC addressing in the RHOCP Deploying installer-provisioned clusters on bare metal guide.
    3 The name of the Secret CR you created in the previous step for accessing the BMC of the node.
    4 Optional: The name of the network configuration secret in the local namespace to pass to the pre-provisioning image. The network configuration must be in nmstate format.
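
    For example, a minimal sketch of network data in nmstate format, assuming a single interface named enp1s0 that is addressed with DHCP:

      interfaces:
      - name: enp1s0
        type: ethernet
        state: up
        ipv4:
          enabled: true
          dhcp: true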

    For more information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the RHOCP Postinstallation configuration guide.

  5. Create the BareMetalHost resources:

    $ oc create -f bmh_networker_nodes.yaml
  6. Verify that the BareMetalHost resources have been created and are in the Available state:

    $ oc get bmh
    NAME         STATE            CONSUMER              ONLINE   ERROR   AGE
    edpm-networker-0   Available      openstack-edpm        true             2d21h
    edpm-networker-1   Available      openstack-edpm        true             2d21h
    ...

Define an OpenStackDataPlaneNodeSet custom resource (CR) for each group of unprovisioned Networker nodes. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR.

You use the nodeTemplate field to configure the common properties to apply to all nodes in an OpenStackDataPlaneNodeSet CR, and the nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate.

Tip

For an example OpenStackDataPlaneNodeSet CR that creates a node set from unprovisioned Networker nodes, see Example node set CR for unprovisioned Networker nodes with OVS-DPDK.

Prerequisites

  • You have created a BareMetalHost CR for each bare-metal Networker node, and the node resources are in the Available state.

Procedure

  1. Create a file on your workstation named openstack_unprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-data-plane 1
      namespace: openstack
    spec:
      tlsEnabled: true
      env: 2
        - name: ANSIBLE_FORCE_COLOR
          value: "True"

    1 The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set.
    2 Optional: A list of environment variables to pass to the pod.
  2. Connect the data plane to the control plane network:

    spec:
      ...
      networkAttachments:
        - ctlplane
  3. Specify that the nodes in this set are unprovisioned and must be provisioned when creating the resource:

      preProvisioned: false
  4. Define the baremetalSetTemplate field to describe the configuration of the bare-metal nodes that must be provisioned when creating the resource:

      baremetalSetTemplate:
        deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
        bmhNamespace: <bmh_namespace>
        cloudUserName: <ansible_ssh_user>
        bmhLabelSelector:
          app: <bmh_label>
        ctlplaneInterface: <interface>
    • Replace <bmh_namespace> with the namespace defined in the corresponding BareMetalHost CR for the node, for example, openshift-machine-api.
    • Replace <ansible_ssh_user> with the username of the Ansible SSH user, for example, cloud-admin.
    • Replace <bmh_label> with the label defined in the corresponding BareMetalHost CR for the node, for example, openstack-networker. Metadata labels, such as app, workload, and nodeName are key-value pairs that provide varying levels of granularity for labelling nodes. Set the bmhLabelSelector field to select data plane nodes based on labels that match the labels in the corresponding BareMetalHost CR.
    • Replace <interface> with the control plane interface the node connects to, for example, enp6s0.
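
    For example, a baremetalSetTemplate definition with illustrative values that match the BareMetalHost CRs created earlier in this chapter:

      baremetalSetTemplate:
        deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
        bmhNamespace: openstack
        cloudUserName: cloud-admin
        bmhLabelSelector:
          app: openstack-networker
        ctlplaneInterface: enp6s0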
  5. If you created a custom OpenStackProvisionServer CR, add it to your baremetalSetTemplate definition:

      baremetalSetTemplate:
        ...
        provisionServerName: my-os-provision-server
  6. Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes:

      nodeTemplate:
        ansibleSSHPrivateKeySecret: <secret-key>
    • Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the data plane secrets, for example, dataplane-ansible-ssh-private-key-secret.
  7. Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce. Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO and the ansible-runner creates a FIFO file to store logs. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment.
  8. Enable persistent logging for the data plane nodes:

      nodeTemplate:
        ...
        extraMounts:
          - extraVolType: Logs
            volumes:
            - name: ansible-logs
              persistentVolumeClaim:
                claimName: <pvc_name>
            mounts:
            - name: ansible-logs
              mountPath: "/runner/artifacts"
    • Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster.
  9. Specify the management network:

      nodeTemplate:
        ...
        managementNetwork: ctlplane
  10. Specify the Secret CRs used to source the usernames and passwords to register the operating system of the nodes that are not registered to the Red Hat Customer Portal, and enable repositories for your nodes. The following example demonstrates how to register your nodes to Red Hat Content Delivery Network (CDN). For information about how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts.

      nodeTemplate:
        ansible:
          ansibleUser: cloud-admin 1
          ansiblePort: 22
          ansibleVarsFrom:
            - secretRef:
                name: subscription-manager
            - secretRef:
                name: redhat-registry
          ansibleVars: 2
            rhc_release: 9.4
            rhc_repositories:
                - {name: "*", state: disabled}
                - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
                - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
                - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
                - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
            edpm_bootstrap_release_version_package: []

    1 The user associated with the secret you created in Creating the data plane secrets.
    2 The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/.

    For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log into registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.

  11. Add the network configuration template to apply to your data plane nodes.

      nodeTemplate:
        ...
        ansible:
          ...
          ansiblePort: 22
          ansibleUser: cloud-admin
          ansibleVars:
            ...
            edpm_enable_chassis_gw: true
            edpm_network_config_nmstate: true
            ...
            neutron_physical_bridge_name: br-ex
            neutron_public_interface_name: eth0
            edpm_network_config_update: false 1

    1 When deploying a node set for the first time, set the edpm_network_config_update variable to false. When updating or adopting a node set, set edpm_network_config_update to true.
    Important

    After an update or an adoption, you must reset edpm_network_config_update to false. Otherwise, the nodes could lose network access. Whenever edpm_network_config_update is true, the updated network configuration is reapplied every time an OpenStackDataPlaneDeployment CR is created that includes the configure-network service in its servicesOverride list.

    The following example applies a VLAN network configuration to a set of data plane Networker nodes with DPDK:

            edpm_network_config_template: |
              ...
              {% set mtu_list = [ctlplane_mtu] %}
              {% for network in nodeset_networks %}
              {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
              {%- endfor %}
              {% set min_viable_mtu = mtu_list | max %}
              network_config:
              - type: ovs_user_bridge
                name: {{ neutron_physical_bridge_name }}
                mtu: {{ min_viable_mtu }}
                use_dhcp: false
                dns_servers: {{ ctlplane_dns_nameservers }}
                domain: {{ dns_search_domains }}
                addresses:
                - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
                routes: {{ ctlplane_host_routes }}
                members:
                - type: ovs_dpdk_port
                  driver: mlx5_core
                  name: dpdk0
                  mtu: {{ min_viable_mtu }}
                  members:
                  - type: sriov_vf
                    device: nic6
                    vfid: 0
                - type: interface
                  name: nic1
                  mtu: {{ min_viable_mtu }}
                  # force the MAC address of the bridge to this interface
                  primary: true
              {% for network in nodeset_networks %}
                - type: vlan
                  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
                  vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
                  addresses:
                  - ip_netmask:
                      {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
                  routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
              {% endfor %}

    The following example applies a VLAN network configuration to a set of data plane Networker nodes without DPDK:

            edpm_network_config_template: |
              ---
              {% set mtu_list = [ctlplane_mtu] %}
              {% for network in nodeset_networks %}
              {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
              {%- endfor %}
              {% set min_viable_mtu = mtu_list | max %}
              network_config:
                - type: ovs_bridge
                  name: {{ neutron_physical_bridge_name }}
                  mtu: {{ min_viable_mtu }}
                  use_dhcp: false
                  dns_servers: {{ ctlplane_dns_nameservers }}
                  domain: {{ dns_search_domains }}
                  addresses:
                    - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
                  routes: {{ ctlplane_host_routes }}
                  members:
                    - type: interface
                      name: nic2
                      mtu: {{ min_viable_mtu }}
                      # force the MAC address of the bridge to this interface
                      primary: true
              {% for network in nodeset_networks %}
                    - type: vlan
                      mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
                      vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
                      addresses:
                        - ip_netmask: >-
                            {{
                              lookup('vars', networks_lower[network] ~ '_ip')
                            }}/{{
                              lookup('vars', networks_lower[network] ~ '_cidr')
                            }}
                      routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
              {% endfor %}

    For more information about data plane network configuration, see Customizing data plane networks in Configuring network services.

  12. Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.
  13. Define each node in this node set:

      nodes:
        edpm-networker-0: 1
          hostName: edpm-networker-0
          networks: 2
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.100 3
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.100
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.100
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.100
          ansible:
            ansibleHost: 192.168.122.100
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-networker-0.example.com
          bmhLabelSelector: 4
            nodeName: edpm-networker-0
        edpm-networker-1:
          hostName: edpm-networker-1
          networks:
          - name: ctlplane
            subnetName: subnet1
            defaultRoute: true
            fixedIP: 192.168.122.101
          - name: internalapi
            subnetName: subnet1
            fixedIP: 172.17.0.101
          - name: storage
            subnetName: subnet1
            fixedIP: 172.18.0.101
          - name: tenant
            subnetName: subnet1
            fixedIP: 172.19.0.101
          ansible:
            ansibleHost: 192.168.122.101
            ansibleUser: cloud-admin
            ansibleVars:
              fqdn_internal_api: edpm-networker-1.example.com
          bmhLabelSelector:
            nodeName: edpm-networker-1

    1 The node definition reference, for example, edpm-networker-0. Each node in the node set must have a node definition.
    2 Defines the IPAM and the DNS records for the node.
    3 Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR.
    4 Optional: The BareMetalHost CR metadata label that selects the BareMetalHost CR for the data plane node. The label can be any label that is defined for the BareMetalHost CR. The label is used with the bmhLabelSelector label configured in the baremetalSetTemplate definition to select the BareMetalHost for the node.
    Note
    • Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section.
    • You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node.
    • Many ansibleVars include edpm in the name, which stands for "External Data Plane Management".

    For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.

  14. Save the openstack_unprovisioned_node_set.yaml definition file.
  15. Create the data plane resources:

    $ oc create --save-config -f openstack_unprovisioned_node_set.yaml -n openstack
  16. Verify that the data plane resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  17. Verify that the Secret resource was created for the node set:

    $ oc get secret -n openstack | grep openstack-data-plane
    dataplanenodeset-openstack-data-plane Opaque 1 3m50s
  18. Verify that the nodes have transitioned to the provisioned state:

    $ oc get bmh
    NAME            STATE         CONSUMER               ONLINE   ERROR   AGE
    edpm-networker-0  provisioned   openstack-data-plane   true             3d21h
  19. Verify that the services were created:

    $ oc get openstackdataplaneservice -n openstack
    NAME                    AGE
    bootstrap               8m40s
    ceph-client             8m40s
    ceph-hci-pre            8m40s
    configure-network       8m40s
    configure-os            8m40s
    ...

The following example OpenStackDataPlaneNodeSet CR creates a node set from unprovisioned Networker nodes with OVS-DPDK and some node-specific configuration. The unprovisioned Networker nodes are provisioned when the node set is created. Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters.

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: networker-nodes
  namespace: openstack
spec:
  services:
  - redhat
  - bootstrap
  - download-cache
  - reboot-os
  - configure-ovs-dpdk
  - configure-network
  - validate-network
  - install-os
  - configure-os
  - ssh-known-hosts
  - run-os
  - install-certs
  - ovn
  - neutron-metadata

  nodeTemplate:
    ansible:
      ansibleVars:
        edpm_enable_chassis_as_gw: true
        edpm_kernel_args: default_hugepagesz=1GB hugepagesz=1G hugepages=64 iommu=pt
          intel_iommu=on tsx=off isolcpus=2-47,50-95
        edpm_network_config_nmstate: true
        ...
        edpm_network_config_template: |
          ...
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: interface
            name: nic1
            use_dhcp: false

          - type: sriov_pf
            name: nic6
            mtu: 9000
            numvfs: 2
            use_dhcp: false
            defroute: false
            nm_controlled: true
            hotplug: true
            promisc: false

          - type: ovs_user_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: ovs_dpdk_port
              driver: mlx5_core
              name: dpdk0
              mtu: {{ min_viable_mtu }}
              members:
              - type: sriov_vf
                device: nic6
                vfid: 0

          - type: linux_bond
            name: bond_api
            use_dhcp: false
            bonding_options: "mode=active-backup"
            dns_servers: {{ ctlplane_dns_nameservers }}
            members:
            - type: sriov_vf
              device: nic6
              driver: mlx5_core
              mtu: {{ min_viable_mtu }}
              spoofcheck: false
              promisc: false
              vfid: 1
              primary: true

          - type: vlan
            vlan_id: {{ lookup('vars', networks_lower['internalapi'] ~ '_vlan_id') }}
            device: bond_api
            addresses:
            - ip_netmask: {{ lookup('vars', networks_lower['internalapi'] ~ '_ip') }}/{{ lookup('vars', networks_lower['internalapi'] ~ '_cidr') }}

          - type: ovs_user_bridge
            name: br-link0
            use_dhcp: false
            ovs_extra: "set port br-link0 tag={{ lookup('vars', networks_lower['tenant'] ~ '_vlan_id') }}"
            addresses:
            - ip_netmask: {{ lookup('vars', networks_lower['tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['tenant'] ~ '_cidr')}}
            members:
            - type: ovs_dpdk_bond
              name: dpdkbond0
              mtu: 9000
              rx_queue: 1
              ovs_extra: "set port dpdkbond0 bond_mode=balance-slb"
              members:
              - type: ovs_dpdk_port
                name: dpdk1
                members:
                - type: interface
                  name: nic4
              - type: ovs_dpdk_port
                name: dpdk2
                members:
                - type: interface
                  name: nic5

          - type: ovs_user_bridge
            name: br-link1
            use_dhcp: false
            members:
            - type: ovs_dpdk_bond
              name: dpdkbond1
              mtu: 9000
              rx_queue: 1
              ovs_extra: "set port dpdkbond1 bond_mode=balance-slb"
              members:
              - type: ovs_dpdk_port
                name: dpdk3
                members:
                - type: interface
                  name: nic2
              - type: ovs_dpdk_port
                name: dpdk4
                members:
                - type: interface
                  name: nic3
        edpm_ovn_bridge_mappings:
        - access:br-ex
        - dpdkmgmt:br-link0
        - dpdkdata0:br-link1
        edpm_ovs_dpdk_memory_channels: 4
        edpm_ovs_dpdk_pmd_core_list: 2,3,50,51
        edpm_ovs_dpdk_socket_memory: 4096,4096
        edpm_tuned_isolated_cores: 2-47,50-95
        edpm_tuned_profile: cpu-partitioning
        neutron_physical_bridge_name: br-ex
        neutron_public_interface_name: eth0
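
After the node set is deployed, the physical network names in edpm_ovn_bridge_mappings (access, dpdkmgmt, and dpdkdata0 in this example) are the names that you reference when you create provider networks. The following sketch assumes a VLAN provider network on the dpdkdata0 physical network; the network name and segmentation ID are illustrative:

$ openstack network create dpdk-vlan-100 \
    --provider-network-type vlan \
    --provider-physical-network dpdkdata0 \
    --provider-segment 100 \
    --external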

5.5. Deploying the data plane

You use the OpenStackDataPlaneDeployment CRD to configure the services on the data plane nodes and deploy the data plane. You control the execution of Ansible on the data plane by creating OpenStackDataPlaneDeployment custom resources (CRs). Each OpenStackDataPlaneDeployment CR models a single Ansible execution. When the OpenStackDataPlaneDeployment completes successfully, it does not automatically execute Ansible again, even if the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources are changed. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR.
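
For example, to rerun Ansible after you change a node set, you might create a new OpenStackDataPlaneDeployment CR with a unique name. A minimal sketch, with an illustrative name:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: data-plane-deploy-update-1
  namespace: openstack
spec:
  nodeSets:
    - openstack-data-plane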

Create an OpenStackDataPlaneDeployment CR that deploys each of your OpenStackDataPlaneNodeSet CRs.

Procedure

  1. Create a file on your workstation named openstack_data_plane_deploy.yaml to define the OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: data-plane-deploy 1
      namespace: openstack
    1 The OpenStackDataPlaneDeployment CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the node sets in the deployment.
  2. Add all the OpenStackDataPlaneNodeSet CRs that you want to deploy:

    spec:
      nodeSets:
        - openstack-data-plane
        - <nodeSet_name>
        - ...
        - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
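    For example, to include the Networker node set from the earlier OVS-DPDK example together with the openstack-data-plane node set (substitute the names that you used):

    spec:
      nodeSets:
        - openstack-data-plane
        - networker-nodes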
  3. Save the openstack_data_plane_deploy.yaml deployment file.
  4. Deploy the data plane:

    $ oc create -f openstack_data_plane_deploy.yaml -n openstack

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10

    If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
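    For example, you might rerun the command with a limit that exceeds the number of reported log streams:

    $ oc logs -l app=openstackansibleee -f --max-log-requests 20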
  5. Verify that the data plane is deployed:

    $ oc get openstackdataplanedeployment -n openstack
    NAME                STATUS   MESSAGE
    data-plane-deploy   True     Setup Complete


    $ oc get openstackdataplanenodeset -n openstack
    NAME                   STATUS   MESSAGE
    openstack-data-plane   True     NodeSet Ready

    For information about the meaning of the returned status, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

    If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide.
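
    Before deeper troubleshooting, you can inspect the deployment and node set conditions directly, for example:

    $ oc describe openstackdataplanedeployment data-plane-deploy -n openstack
    $ oc get openstackdataplanenodeset openstack-data-plane -n openstack -o yaml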
