Chapter 4. Creating the data plane
The Red Hat OpenStack Services on OpenShift (RHOSO) data plane consists of RHEL 9.4 nodes. Use the OpenStackDataPlaneNodeSet custom resource definition (CRD) to create the custom resources (CRs) that define the nodes and the layout of the data plane. You can use pre-provisioned nodes, or provision bare-metal nodes as part of the data plane creation and deployment process.
To create and deploy a data plane, you must perform the following tasks:
- Create a Secret CR for Ansible to use to execute commands on the data plane nodes.
- Create the OpenStackDataPlaneNodeSet CRs that define the nodes and layout of the data plane.
- Create the OpenStackDataPlaneDeployment CRs that trigger the Ansible execution to deploy and configure software.
4.1. Prerequisites
- A functional control plane, created with the OpenStack Operator. For more information, see Creating the control plane.
- Pre-provisioned nodes must be configured with an SSH public key in the $HOME/.ssh/authorized_keys file for a user with passwordless sudo privileges.
- For bare-metal nodes that are provisioned when creating the OpenStackDataPlaneNodeSet resource:
  - The Cluster Baremetal Operator (CBO) is installed and configured for provisioning. For more information, see Provisioning bare-metal data plane nodes.
  - A BareMetalHost CR is registered and inspected for each bare-metal data plane node. Each bare-metal node must be in the Available state after inspection. For more information about configuring bare-metal nodes, see Bare metal configuration in the RHOCP Postinstallation configuration guide.
  - The edpm-hardened-uefi-rhel9:18.0.0 image in the operator bundles is updated from the default version 10 to version 9:

    $ oc edit csv openstack-operator.v0.1.3

    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    ...
    value: registry.redhat.io/rhoso-beta/edpm-hardened-uefi-rhel9:18.0.0-9
    value: edpm-hardened-uefi.qcow2
    ...

    $ oc edit csv openstack-baremetal-operator.v0.1.3

    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    ...
    value: registry.redhat.io/rhoso-beta/edpm-hardened-uefi-rhel9:18.0.0-9
    value: edpm-hardened-uefi.qcow2
    ...
- You are logged on to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.
4.2. Creating the SSH key secrets
You must generate SSH keys and create an SSH key Secret custom resource (CR) for each key to enable the following functionality:
- An SSH key that enables Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key.
- An SSH key that enables migration of instances between Compute nodes.
The Secret CRs are used by the data plane nodes to enable secure access between nodes.
Procedure
Create the SSH key pair for Ansible:

$ KEY_FILE_NAME=<key_file_name>
$ ssh-keygen -f $KEY_FILE_NAME -N "" -t rsa -b 4096

- Replace <key_file_name> with the name to use for the key pair.
Create the Secret CR for Ansible and apply it to the cluster:

$ SECRET_NAME=<secret_name>
$ oc create secret generic $SECRET_NAME \
  --save-config \
  --dry-run=client \
  [--from-file=authorized_keys=$KEY_FILE_NAME.pub \]
  --from-file=ssh-privatekey=$KEY_FILE_NAME \
  --from-file=ssh-publickey=$KEY_FILE_NAME.pub \
  -n openstack \
  -o yaml | oc apply -f -

- Replace <secret_name> with the name you want to use for the Secret resource.
- Include the --from-file=authorized_keys option for bare-metal nodes that must be provisioned when creating the data plane.
Create the SSH key pair for instance migration:

$ ssh-keygen -f ./id -t ecdsa-sha2-nistp521 -N ''
Create the Secret CR for migration and apply it to the cluster:

$ oc create secret generic nova-migration-ssh-key \
  --from-file=ssh-privatekey=id \
  --from-file=ssh-publickey=id.pub \
  -n openstack \
  -o yaml | oc apply -f -
Verify that the Secret CRs are created:

$ oc describe secret $SECRET_NAME
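You can run the same check against the migration key Secret created in the previous step; nova-migration-ssh-key is the name used in the command above:

$ oc describe secret nova-migration-ssh-key -n openstack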
4.3. Creating a set of data plane nodes
You use the OpenStackDataPlaneNodeSet CRD to define the data plane and the data plane nodes. An OpenStackDataPlaneNodeSet custom resource (CR) represents a set of nodes of the same type that have similar configuration, comparable to the concept of a "role" in a director-deployed Red Hat OpenStack Services on OpenShift (RHOSO) environment.
Create an OpenStackDataPlaneNodeSet CR for each logical grouping of nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR. Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If your control plane includes additional Compute cells, you must specify the cell to which the node set is connected.
Procedure
Copy the sample OpenStackDataPlaneNodeSet CR and save it to a file named openstack-edpm.yaml on your workstation:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm
spec:
  tlsEnabled: true
  env:
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
  services:
    - bootstrap
    - download-cache
    - configure-network
    - validate-network
    - install-os
    - configure-os
    - ssh-known-hosts
    - run-os
    - reboot-os
    - install-certs
    - ovn
    - neutron-metadata
    - libvirt
    - nova
    - telemetry
  preProvisioned: true
  networkAttachments:
    - ctlplane
  nodes:
    edpm-compute-0:
      hostName: edpm-compute-0
      ansible:
        ansibleHost: 192.168.122.100
      networks:
        - name: ctlplane
          subnetName: subnet1
          defaultRoute: true
          fixedIP: 192.168.122.100
        - name: internalapi
          subnetName: subnet1
        - name: storage
          subnetName: subnet1
        - name: tenant
          subnetName: subnet1
  nodeTemplate:
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
    ansible:
      ansibleVarsFrom:
        - prefix: edpm_
          configMapRef:
            name: network-config-template
        - prefix: neutron_
          configMapRef:
            name: neutron-edpm
        # CHANGEME -- see https://access.redhat.com/solutions/253273
        # - prefix: subscription_manager_
        #   secretRef:
        #     name: subscription-manager
        # - prefix: registry_
        #   secretRef:
        #     name: redhat-registry
      ansibleVars:
        # CHANGEME -- see https://access.redhat.com/solutions/253273
        # edpm_bootstrap_command: |
        #   subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }}
        #   podman login -u {{ registry_username }} -p {{ registry_password }} registry.redhat.io
        edpm_nodes_validation_validate_controllers_icmp: false
        edpm_nodes_validation_validate_gateway_icmp: false
        gather_facts: false
        enable_debug: false
        # edpm firewall, change the allowed CIDR if needed
        edpm_sshd_allowed_ranges: ['192.168.122.0/24']
The sample OpenStackDataPlaneNodeSet CR is connected to cell1 by default. If you added additional Compute cells to the control plane and you want to connect the node set to one of the other cells, you must create a custom service for the node set that includes the Secret CR for the cell:

Create a custom nova service that includes the Secret CR for the cell to connect to:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: nova-cell-custom
spec:
  label: dataplane-deployment-custom-service
  playbook: osp.edpm.nova
  ...
  secrets:
    - nova-cell2-compute-config 1

1 The Secret CR generated by the control plane for the cell.
For information about how to create a custom service, see Creating a custom service.
Replace the nova service in your OpenStackDataPlaneNodeSet CR with your custom nova service:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm-ipam
spec:
  services:
    - download-cache
    - bootstrap
    - configure-network
    - validate-network
    - install-os
    - configure-os
    - run-os
    - ovn
    - neutron-metadata
    - libvirt
    - nova-cell-custom
    - telemetry
Note: Do not change the order of the default services.
If you are deploying multiple node sets, ensure that you add the ssh-known-hosts service to exactly one node set of the deployment, because this service is deployed globally:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm-ipam
spec:
  services:
    - download-cache
    - bootstrap
    - configure-network
    - validate-network
    - install-os
    - configure-os
    - ssh-known-hosts
    - run-os
    - ovn
    - neutron-metadata
    - libvirt
    - nova-cell-custom
    - telemetry
Update ansibleSSHPrivateKeySecret to the name of the SSH key Secret that you created to enable Ansible to connect to the data plane nodes:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm-ipam
spec:
  nodeTemplate:
    ansibleSSHPrivateKeySecret: <secret-key>

- Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the SSH key secrets, for example, dataplane-ansible-ssh-private-key-secret.
- Optional: Configure the node set for a Compute feature or workload. For more information, see Configuring a node set for a Compute feature or workload.
Optional: The sample OpenStackDataPlaneNodeSet CR that you copied includes the minimum common configuration required for a set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. You can edit the configured values as required, and you can add additional configuration.
For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR properties.
For example OpenStackDataPlaneNodeSet CR nodeTemplate definitions, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes or Example OpenStackDataPlaneNodeSet CR for bare-metal nodes.
Optional: The sample OpenStackDataPlaneNodeSet CR you copied applies the single NIC VLANs network configuration by default to the data plane nodes. You can edit the template that is applied. For example, to configure the data plane for multiple NICs, copy the contents of the roles/edpm_network_config/templates/multiple_nics/multiple_nics.j2 file and add it to your openstack-edpm.yaml file:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm-ipam
spec:
  ...
  nodeTemplate:
    ...
    ansible:
      ansibleVars:
        edpm_network_config_template: |
          ---
          network_config:
          - type: interface
            name: nic1
            mtu: {{ ctlplane_mtu }}
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            routes: {{ ctlplane_host_routes }}
            use_dhcp: false
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
          {% for network in networks_all if network not in networks_skip_config %}
          {% if network not in ["External", "Tenant"] and network in role_networks %}
          - type: interface
            name: nic{{ loop.index + 1 }}
            mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
            use_dhcp: false
            addresses:
            - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
            routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
          {% elif network in role_networks or 'external_bridge' in role_tags %}
          - type: ovs_bridge
          {% if network == 'External' %}
            name: {{ neutron_physical_bridge_name }}
          {% else %}
            name: {{ 'br-' ~ networks_lower[network] }}
          {% endif %}
            mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
            dns_servers: {{ ctlplane_dns_nameservers }}
            use_dhcp: false
            addresses:
            - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
            routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
            members:
            - type: interface
              name: nic{{ loop.index + 1 }}
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              use_dhcp: false
              primary: true
          {% endif %}
          {% endfor %}
You can copy a sample template from https://github.com/openstack-k8s-operators/dataplane-operator/tree/main/config/samples/nic-config-samples.
Register the operating system of the nodes that are not registered to the Red Hat Customer Portal, and enable repositories for your nodes:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm-ipam
spec:
  preProvisioned: true
  ...
  nodeTemplate:
    ansible:
      ...
      ansibleVars:
        edpm_bootstrap_command: |
          subscription-manager register --username <subscription_manager_username> --password <subscription_manager_password>
          subscription-manager release --set=9.4
          subscription-manager repos --disable=*
          subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18-beta-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms
          podman login -u <registry_username> -p <registry_password> registry.redhat.io
- Replace <subscription_manager_username> with the applicable user name.
- Replace <subscription_manager_password> with the applicable password.
- Replace <registry_username> with the applicable user name.
- Replace <registry_password> with the applicable password.
For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273. For information about how to log in to registry.redhat.io, see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6.
If your nodes are not pre-provisioned, you must configure the bare-metal template:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm-ipam
spec:
  preProvisioned: false
  baremetalSetTemplate:
    deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret
    bmhNameSpace: openshift-machine-api 1
    cloudUserName: <ansible_ssh_user>
    bmhLabelSelector:
      app: openstack 2
    ctlplaneInterface: enp1s0
    dnsSearchDomains:
      - osptest.openstack.org
For more information about provisioning bare-metal nodes, see Provisioning bare-metal data plane nodes.
Note: The BMO manages BareMetalHost CRs in the openshift-machine-api namespace by default. If the BMO must also manage BareMetalHost CRs in other namespaces, you must update the Provisioning CR to watch all namespaces:

$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'
Optional: The sample OpenStackDataPlaneNodeSet CR that you copied includes default node configurations in the nodes section. If necessary, you can add additional nodes and edit the configured values. For example, to add node-specific Ansible variables that customize the node, add the following configuration to your openstack-edpm.yaml file:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm-ipam
spec:
  ...
  nodes:
    edpm-compute-0: 1
      hostName: edpm-compute-0
      networks: 2
        - name: ctlplane
          subnetName: subnet1
          defaultRoute: true
          fixedIP: 192.168.122.100 3
        - name: internalapi
          subnetName: subnet1
          fixedIP: 172.17.0.100
        - name: storage
          subnetName: subnet1
          fixedIP: 172.18.0.100
        - name: tenant
          subnetName: subnet1
          fixedIP: 172.19.0.100
      ansible:
        ansibleHost: 192.168.122.100
        ansibleUser: cloud-admin
        ansibleVars: 4
          fqdn_internal_api: edpm-compute-0.example.com
    edpm-compute-1:
      hostName: edpm-compute-1
      networks:
        - name: ctlplane
          subnetName: subnet1
          defaultRoute: true
          fixedIP: 192.168.122.101
        - name: internalapi
          subnetName: subnet1
        - name: storage
          subnetName: subnet1
        - name: tenant
          subnetName: subnet1
      ansible:
        ansibleHost: 192.168.122.101
        ansibleUser: cloud-admin
        ansibleVars:
          fqdn_internal_api: edpm-compute-1.example.com
Note:
- Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific value overrides the value from the nodeTemplate section.
- You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node, as shown in the sketch after this note.
For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties. For example OpenStackDataPlaneNodeSet CR nodes definitions, see Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes or Example OpenStackDataPlaneNodeSet CR for bare-metal nodes.
CR for bare-metal nodes.-
Nodes defined within the
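As an illustrative sketch of this override behavior, the following fragment gives fqdn_internal_api (the node-specific variable used in the sample above) a hypothetical nodeTemplate default; only edpm-compute-0 overrides it, and every other node in the set inherits the default:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm-ipam
spec:
  nodes:
    edpm-compute-0:
      hostName: edpm-compute-0
      ansible:
        ansibleVars:
          # Overrides the nodeTemplate value on this node only
          fqdn_internal_api: edpm-compute-0.example.com
  nodeTemplate:
    ansible:
      ansibleVars:
        # Hypothetical default inherited by every node that does not override it
        fqdn_internal_api: edpm.example.com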
Optional: Customize the container images used by the edpm-ansible roles. The following example shows the default images:

spec:
  ...
  nodeTemplate:
    ...
    ansible:
      ...
      ansibleVars:
        edpm_iscsid_image: "registry.redhat.io/rhoso-beta/openstack-iscsid-rhel9:18.0"
        edpm_logrotate_crond_image: "registry.redhat.io/rhoso-beta/openstack-cron-rhel9:18.0"
        edpm_ovn_controller_agent_image: "registry.redhat.io/rhoso-beta/openstack-ovn-controller-rhel9:18.0"
        edpm_ovn_metadata_agent_image: "registry.redhat.io/rhoso-beta/openstack-neutron-metadata-agent-ovn-rhel9:18.0"
        edpm_frr_image: "registry.redhat.io/rhoso-beta/openstack-frr-rhel9:18.0"
        edpm_ovn_bgp_agent_image: "registry.redhat.io/rhoso-beta/openstack-ovn-bgp-agent-rhel9:18.0"
        telemetry_node_exporter_image: "registry.redhat.io/prometheus/node-exporter:v1.5.0"
        edpm_libvirt_image: "registry.redhat.io/rhoso-beta/openstack-nova-libvirt-rhel9:18.0"
        edpm_nova_compute_image: "registry.redhat.io/rhoso-beta/openstack-nova-compute-rhel9:18.0"
        edpm_neutron_sriov_image: "registry.redhat.io/rhoso-beta/openstack-neutron-sriov-agent-rhel9:18.0"
        edpm_multipathd_image: "registry.redhat.io/rhoso-beta/openstack-multipathd-rhel9:18.0"
Save the openstack-edpm.yaml definition file.
Create the data plane resources:

$ oc create -f openstack-edpm.yaml
Verify that the data plane resources have been created:

$ oc get openstackdataplanenodeset
NAME                  STATUS   MESSAGE
openstack-edpm-ipam   False    Deployment not started
Verify that the Secret resource was created for the node set:

$ oc get secret | grep openstack-edpm-ipam
dataplanenodeset-openstack-edpm-ipam   Opaque   1   3m50s
Verify that the services were created:

$ oc get openstackdataplaneservice
NAME                AGE
configure-network   6d7h
configure-os        6d6h
install-os          6d6h
run-os              6d6h
validate-network    6d6h
ovn                 6d6h
libvirt             6d6h
nova                6d6h
telemetry           6d6h
4.4. Data plane services
A data plane service is an Ansible execution that manages the installation, configuration, and execution of a software deployment on data plane nodes. Each service is a resource instance of the OpenStackDataPlaneService CRD.
The DataPlane Operator provides core services that are deployed by default on data plane nodes. If the services field is omitted from the OpenStackDataPlaneNodeSet specification, then the following services are applied by default in the following order:

services:
  - configure-network
  - validate-network
  - install-os
  - configure-os
  - run-os
  - ovn
  - libvirt
  - nova
  - telemetry
The DataPlane Operator also includes the following services that are not enabled by default:
Service | Description
---|---
ceph-client | Include this service to configure data plane nodes as clients of a Red Hat Ceph Storage server. Include between the install-os and configure-os services. The OpenStackDataPlaneNodeSet CR must also include an extraMounts definition for the Ceph configuration files, as shown in the example after this table.
ceph-hci-pre | Include this service to prepare data plane nodes to host Red Hat Ceph Storage in an HCI configuration. For more information, see Configuring a Hyperconverged Infrastructure environment.
neutron-dhcp | Include this service to run a Neutron DHCP agent on the data plane nodes.
neutron-metadata | Include this service to run the Neutron OVN Metadata agent on the data plane nodes. This agent is required to provide metadata services to the Compute nodes.
neutron-ovn | Include this service to run the Neutron OVN agent on the data plane nodes. This agent is required to provide QoS to hardware offloaded ports on the Compute nodes.
neutron-sriov | Include this service to run a Neutron SR-IOV NIC agent on the data plane nodes.

Example extraMounts definition for the ceph-client service:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
spec:
  ...
  nodeTemplate:
    extraMounts:
      - extraVolType: Ceph
        volumes:
          - name: ceph
            secret:
              secretName: ceph-conf-files
        mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true
For more information about the available default services, see https://github.com/openstack-k8s-operators/dataplane-operator/tree/main/config/services.
You can enable and disable services for an OpenStackDataPlaneNodeSet resource, as shown in the sketch that follows.
Note: Do not change the order of the default service deployments.
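For example, a minimal sketch that enables the non-default neutron-dhcp service for a node set. The placement after ovn mirrors where the other Neutron agent services appear in the samples in this chapter, and is an assumption rather than a documented requirement:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm-ipam
spec:
  services:
    - download-cache
    - bootstrap
    - configure-network
    - validate-network
    - install-os
    - configure-os
    - ssh-known-hosts
    - run-os
    - ovn
    - neutron-dhcp       # non-default service enabled for this node set
    - neutron-metadata
    - libvirt
    - nova
    - telemetry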
You can use the OpenStackDataPlaneService CRD to create custom services that you can deploy on your data plane nodes. You add your custom services to the default list of services at the point where the service must be executed. For more information, see Creating a custom service.
You can view the details of a service by viewing the YAML representation of the resource:

$ oc get openstackdataplaneservice configure-network -o yaml
4.4.1. Creating a custom service
You can use the OpenStackDataPlaneService CRD to create custom services to deploy on your data plane nodes.
Do not create a custom service with the same name as one of the default services. If a custom service name matches a default service name, the default service values overwrite the custom service values during OpenStackDataPlaneNodeSet reconciliation.
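Before creating the service, you can confirm that the name is not already in use. This is a simple sketch that reuses the listing command shown earlier in this section; a NotFound error indicates the name is free:

$ oc get openstackdataplaneservice custom-service
Error from server (NotFound): ...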
You specify the Ansible execution for your service with either an Ansible playbook or by including the free-form play contents directly in the spec section of the service.
You cannot use both an Ansible playbook and an Ansible play in the same service.
Procedure
Create an OpenStackDataPlaneService CR and save it to a YAML file on your workstation, for example custom-service.yaml:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: custom-service
spec:
  label: dataplane-deployment-custom-service
Specify the Ansible commands to create the custom service, by referencing an Ansible playbook or by including the Ansible play in the spec:
Specify the Ansible playbook to use:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: custom-service
spec:
  label: dataplane-deployment-custom-service
  playbook: osp.edpm.configure_os
For information about how to create an Ansible playbook, see Creating a playbook.
Specify the Ansible play as a string that uses Ansible playbook syntax:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: custom-service
spec:
  label: dataplane-deployment-custom-service
  play: |
    - hosts: all
      tasks:
        - name: Hello World!
          shell: "echo Hello World!"
          register: output
        - name: Show output
          debug:
            msg: "{{ output.stdout }}"
        - name: Hello World role
          import_role:
            name: hello_world
- Optional: To override the default container image used by the ansible-runner execution environment with a custom image that uses additional Ansible content for a custom service, build and include a custom ansible-runner image. For information, see Building a custom ansible-runner image.
- Optional: Designate and configure a node set for a Compute feature or workload. For more information, see Configuring a node set for a Compute feature or workload.
Optional: Specify the names of Secret resources to use to pass secrets into the OpenStackAnsibleEE job:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: custom-service
spec:
  ...
  play: |
    ...
  secrets:
    - hello-world-secret-0
    - hello-world-secret-1
A mount is created for each secret in the OpenStackAnsibleEE pod with a file name that matches the secret value. The mounts are created under /var/lib/openstack/configs/<service name>. Your play can read these mounted files directly, as shown in the sketch below.
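For illustration, a minimal sketch of a play task that reads one of the mounted secret files. The path assumes the service is named custom-service and references the hello-world-secret-0 secret from the example above; lookup('file', ...) is evaluated on the Ansible controller, which is the OpenStackAnsibleEE pod where the secret is mounted:

play: |
  - hosts: all
    tasks:
      - name: Show a mounted secret value
        # Evaluated in the OpenStackAnsibleEE pod, where the mount exists;
        # the path assumes this service is named custom-service
        debug:
          msg: "{{ lookup('file', '/var/lib/openstack/configs/custom-service/hello-world-secret-0') }}"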
Create the custom service:
$ oc apply -f custom-service.yaml
Verify that the custom service is created:
$ oc get openstackdataplaneservice <custom_service_name> -o yaml
4.4.2. Configuring a node set for a Compute feature or workload
You can designate a node set for a particular Compute feature or workload. To designate and configure a node set for a feature, complete the following tasks:
- Create a ConfigMap CR to configure the Compute nodes.
- Create a custom nova service for the feature that runs the osp.edpm.nova playbook.
- Include the ConfigMap CR in the custom nova service.
Procedure
Create a ConfigMap CR to configure the Compute nodes. For example, to enable CPU pinning on the Compute nodes, create the following ConfigMap object:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nova-cpu-pinning-configmap
  namespace: openstack
data:
  25-nova-cpu-pinning.conf: |
    [compute]
    cpu_shared_set = 2,6
    cpu_dedicated_set = 1,3,5,7
When the service is deployed, it adds the configuration to /etc/nova/nova.conf.d/ in the nova_compute container.
For more information about creating ConfigMap objects, see Creating and using config maps.
Tip: You can use a Secret to create the custom configuration instead if the configuration includes sensitive information, such as passwords or certificates. A sketch of this alternative follows the next step.
Create a custom nova service for the feature. For information about how to create a custom service, see Creating a custom service.
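For illustration, a minimal sketch of the Secret alternative mentioned in the tip above, carrying the same CPU pinning configuration as the ConfigMap example; the name nova-cpu-pinning-secret is hypothetical:

apiVersion: v1
kind: Secret
metadata:
  name: nova-cpu-pinning-secret   # hypothetical name for this example
  namespace: openstack
type: Opaque
stringData:
  25-nova-cpu-pinning.conf: |
    [compute]
    cpu_shared_set = 2,6
    cpu_dedicated_set = 1,3,5,7

A custom service would then reference this Secret through its secrets list instead of the configMaps list shown in the following steps.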
ConfigMap
CR to the customnova
service:apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: nova-cpu-pinning-service spec: label: dataplane-deployment-custom-service playbook: osp.edpm.nova configMaps: - nova-cpu-pinning-configmap
Specify the Secret CR for the cell that the node set that runs this service connects to:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: nova-cpu-pinning-service
spec:
  label: dataplane-deployment-custom-service
  playbook: osp.edpm.nova
  configMaps:
    - nova-cpu-pinning-configmap
  secrets:
    - nova-migration-ssh-key
    - nova-cell1-compute-config
4.4.3. Building a custom ansible-runner image
You can override the default container image used by the ansible-runner
execution environment with your own custom image when you need additional Ansible content for a custom service.
Procedure
Create a Containerfile that adds the custom content to the default image:

FROM quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest
COPY my_custom_role /usr/share/ansible/roles/my_custom_role
Build and push the image to a container registry:

$ podman build -t quay.io/example_user/my_custom_image:latest .
$ podman push quay.io/example_user/my_custom_image:latest
Specify your new container image as the image that the ansible-runner execution environment must use to add the additional Ansible content that your custom service requires, such as Ansible roles or modules:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: custom-service
spec:
  label: dataplane-deployment-custom-service
  openStackAnsibleEERunnerImage: quay.io/example_user/my_custom_image:latest 1
  play: |
    ...

1 Your container image that the ansible-runner execution environment uses to execute Ansible.
4.5. Deploying the data plane
You use the OpenStackDataPlaneDeployment CRD to configure the services on the data plane nodes and deploy the data plane. Create an OpenStackDataPlaneDeployment custom resource (CR) that deploys each of your OpenStackDataPlaneNodeSet CRs.
Procedure
Create a file called libvirt-secret.yaml and add contents similar to the following:

apiVersion: v1
kind: Secret
metadata:
  name: libvirt-secret
  namespace: openstack
type: Opaque
data:
  LibvirtPassword: cGFzc3dvcmQ=
Create the secret:

$ oc apply -f libvirt-secret.yaml
Copy the sample OpenStackDataPlaneDeployment CR from https://github.com/openstack-k8s-operators/dataplane-operator/blob/main/config/samples/dataplane_v1beta1_openstackdataplanedeployment.yaml and save it to a file named openstack-edpm-deploy.yaml on your workstation.
on your workstation. Optional: Update
nodeSets
to include all theOpenStackDataPlaneNodeSet
CRs that you want to deploy:apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-edpm-ipam spec: nodeSets: - openstack-edpm-ipam - <nodeSet_name> - ... - <nodeSet_name>
-
Replace
<nodeSet_name>
with the names of theOpenStackDataPlaneNodeSet
CRs that you want to include in your data plane deployment.
-
Replace
Save the openstack-edpm-deploy.yaml deployment file.
Deploy the data plane:

$ oc create -f openstack-edpm-deploy.yaml
You can view the Ansible logs while the deployment executes:

$ oc get pod -l app=openstackansibleee -w
$ oc logs -l app=openstackansibleee -f --max-log-requests 10
Verify that the data plane is deployed:

$ oc get openstackdataplanedeployment
NAME                  STATUS   MESSAGE
openstack-edpm-ipam   True     Setup Complete

$ oc get openstackdataplanenodeset
NAME                  STATUS   MESSAGE
openstack-edpm-ipam   True     NodeSet Ready
For information on the meaning of the returned status, see Data plane conditions and states.
If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment.
Map the Compute nodes to the Compute cell that they are connected to:
$ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose
If you did not create additional cells, this command maps the Compute nodes to cell1.
Access the remote shell for the openstackclient pod and verify that the deployed Compute nodes are visible on the control plane:

$ oc rsh -n openstack openstackclient
$ openstack hypervisor list
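As an additional check from the same openstackclient shell, you can confirm that the nova-compute services on the new nodes are reported as up; openstack compute service list is a standard OpenStack CLI command:

$ openstack compute service list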
4.6. OpenStackDataPlaneNodeSet CR properties
The following tables detail the OpenStackDataPlaneNodeSet CR properties you can configure.
nodeTemplate properties:

Properties | Description
---|---
ansibleSSHPrivateKeySecret | Name of the private SSH key secret that contains the private SSH key for connecting to nodes. Secret name format: Secret.data.ssh-privatekey. For more information, see Creating an SSH authentication secret. Default: dataplane-ansible-ssh-private-key-secret
managementNetwork | Name of the network to use for management (SSH/Ansible). Default: ctlplane
networks | Network definitions for the OpenStackDataPlaneNodeSet.
ansible | Ansible configuration options. For more information, see ansible properties.
extraMounts | The files to mount into an Ansible Execution Pod.
userData | UserData configuration for the OpenStackDataPlaneNodeSet.
networkData | NetworkData configuration for the OpenStackDataPlaneNodeSet.
nodes properties:

Properties | Description
---|---
ansible | Ansible configuration options. For more information, see ansible properties.
extraMounts | The files to mount into an Ansible Execution Pod.
hostName | The node name.
managementNetwork | Name of the network to use for management (SSH/Ansible).
networkData | NetworkData configuration for the node.
networks | Instance networks.
userData | Node-specific user data.
ansible properties:

Properties | Description
---|---
ansibleUser | The user associated with the secret you created in Creating the SSH key secrets. Default: cloud-admin
ansibleHost | SSH host for the Ansible connection.
ansiblePort | SSH port for the Ansible connection.
ansibleVars | The Ansible variables that customize the set of nodes. You can use this property to configure any custom Ansible variable, including the Ansible variables available for each edpm-ansible role. Note: The ansibleVars that you configure for a specific node override the same variables that are configured in the nodeTemplate section.
4.7. Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes
The following example OpenStackDataPlaneNodeSet CR creates a set of generic Compute nodes with some node-specific configuration.
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm-ipam
spec:
  env: 1
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
  preProvisioned: true 2
  services: 3
    - bootstrap
    - configure-network
    - validate-network
    - install-os
    - configure-os
    - run-os
    - install-certs
    - ovn
    - neutron-metadata
    - libvirt
    - ssh-known-hosts
    - nova
    - telemetry
  nodes:
    edpm-compute-0: 4
      hostName: edpm-compute-0
      ansible:
        ansibleHost: 192.168.122.100
        ansibleUser: cloud-admin
      networks:
        - name: CtlPlane
          subnetName: subnet1
          defaultRoute: true
          fixedIP: 192.168.122.100
        - name: InternalApi
          subnetName: subnet1
        - name: Storage
          subnetName: subnet1
        - name: Tenant
          subnetName: subnet1
    edpm-compute-1:
      hostName: edpm-compute-1
      ansible:
        ansibleHost: 192.168.122.101
        ansibleUser: cloud-admin
      networks:
        - name: CtlPlane
          subnetName: subnet1
          defaultRoute: true
          fixedIP: 192.168.122.101
        - name: InternalApi
          subnetName: subnet1
        - name: Storage
          subnetName: subnet1
        - name: Tenant
          subnetName: subnet1
  networkAttachments: 5
    - ctlplane
  nodeTemplate: 6
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret 7
    managementNetwork: ctlplane
    ansible:
      ansibleUser: cloud-admin 8
      ansiblePort: 22
      ansibleVars: 9
        edpm_bootstrap_release_version_package: "rhoso-release"
        edpm_neutron_dhcp_image: "registry.redhat.io/rhoso-beta/openstack-neutron-dhcp-agent-rhel9:18.0.0"
        service_net_map:
          nova_api_network: internal_api
          nova_libvirt_network: internal_api
        timesync_ntp_servers:
          - hostname: pool.ntp.org
        # edpm_network_config
        # Default nic config template for an EDPM compute node
        # These vars are edpm_network_config role vars
        edpm_network_config_template: | 10
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in role_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: interface
              name: nic1
              mtu: {{ min_viable_mtu }}
              # force the MAC address of the bridge to this interface
              primary: true
            {% for network in role_networks %}
            - type: vlan
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              addresses:
              - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
            {% endfor %}
        edpm_network_config_hide_sensitive_logs: false
        # These vars are for the network config templates themselves and are
        # considered EDPM network defaults.
        neutron_physical_bridge_name: br-ex
        neutron_public_interface_name: eth0
        # edpm_nodes_validation
        edpm_nodes_validation_validate_controllers_icmp: false
        edpm_nodes_validation_validate_gateway_icmp: false
        gather_facts: false
        enable_debug: false
        # edpm firewall, change the allowed CIDR if needed
        edpm_sshd_configure_firewall: true
        edpm_sshd_allowed_ranges: ['192.168.122.0/24']
        # SELinux module
        edpm_selinux_mode: enforcing
1 Optional: A list of environment variables to pass to the pod.
2 Specify whether the nodes in this set are pre-provisioned, or whether they are provisioned when creating the resource.
3 The services that are deployed on the data plane nodes in this OpenStackDataPlaneNodeSet CR.
4 The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
5 The networks the ansibleee-runner connects to, specified as a list of netattach resource names.
6 The common configuration to apply to all nodes in this set of nodes.
7 The name of the secret that you created in Creating the SSH key secrets.
8 The user associated with the secret you created in Creating the SSH key secrets.
9 The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/.
10 The network configuration template to apply to nodes in the set. For sample templates, see https://github.com/openstack-k8s-operators/dataplane-operator/tree/main/config/samples/nic-config-samples.
4.8. Example OpenStackDataPlaneNodeSet CR for bare-metal nodes
The following example OpenStackDataPlaneNodeSet CR creates a set of generic Compute nodes with some node-specific configuration.
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm-ipam
spec:
  env: 1
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
  services: 2
    - download-cache
    - bootstrap
    - configure-network
    - validate-network
    - install-os
    - configure-os
    - run-os
    - install-certs
    - ovn
    - neutron-metadata
    - libvirt
    - ssh-known-hosts
    - nova
    - telemetry
  baremetalSetTemplate: 3
    bmhLabelSelector:
      app: openstack
    ctlplaneInterface: enp1s0
    cloudUserName: cloud-admin
  nodes:
    edpm-compute-0: 4
      hostName: edpm-compute-0
  networkAttachments: 5
    - ctlplane
  nodeTemplate: 6
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret 7
    networks: 8
      - name: CtlPlane
        subnetName: subnet1
        defaultRoute: true
      - name: InternalApi
        subnetName: subnet1
      - name: Storage
        subnetName: subnet1
      - name: Tenant
        subnetName: subnet1
    managementNetwork: ctlplane
    ansible:
      ansibleVars: 9
        edpm_bootstrap_release_version_package: "rhoso-release"
        edpm_neutron_dhcp_image: "registry.redhat.io/rhoso-beta/openstack-neutron-dhcp-agent-rhel9:18.0.0"
        edpm_network_config_template: | 10
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in role_networks %}
          {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
          - type: ovs_bridge
            name: {{ neutron_physical_bridge_name }}
            mtu: {{ min_viable_mtu }}
            use_dhcp: false
            dns_servers: {{ ctlplane_dns_nameservers }}
            domain: {{ dns_search_domains }}
            addresses:
            - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }}
            routes: {{ ctlplane_host_routes }}
            members:
            - type: interface
              name: nic1
              mtu: {{ min_viable_mtu }}
              # force the MAC address of the bridge to this interface
              primary: true
            {% for network in role_networks %}
            - type: vlan
              mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
              vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
              addresses:
              - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
              routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
            {% endfor %}
        # These vars are for the network config templates themselves and are
        # considered EDPM network defaults.
        neutron_physical_bridge_name: br-ex
        neutron_public_interface_name: eth0
        # edpm_nodes_validation
        edpm_nodes_validation_validate_controllers_icmp: false
        edpm_nodes_validation_validate_gateway_icmp: false
        gather_facts: false
        enable_debug: false
        # edpm firewall, change the allowed CIDR if needed
        edpm_sshd_allowed_ranges: ['192.168.122.0/24']
1 Optional: A list of environment variables to pass to the pod.
2 The services that are deployed on the data plane nodes in this OpenStackDataPlaneNodeSet CR.
3 Configure the bare-metal template for bare-metal nodes that must be provisioned when creating the resource.
4 The node definition reference, for example, edpm-compute-0. Each node in the node set must have a node definition.
5 The networks the ansibleee-runner connects to, specified as a list of netattach resource names.
6 The common configuration to apply to all nodes in this set of nodes.
7 The name of the secret that you created in Creating the SSH key secrets.
8 Networks for the bare-metal nodes.
9 The Ansible variables that customize the set of nodes. For a list of Ansible variables that you can use, see https://openstack-k8s-operators.github.io/edpm-ansible/.
10 The network configuration template to apply to nodes in the set. For sample templates, see https://github.com/openstack-k8s-operators/edpm-ansible/tree/main/roles/edpm_network_config/templates.
4.9. Data plane conditions and states
Each data plane resource has a series of conditions within its status subresource that indicate the overall state of the resource, including its deployment progress.
For an OpenStackDataPlaneNodeSet, until an OpenStackDataPlaneDeployment has been started and finished successfully, the Ready condition is False. When the deployment succeeds, the Ready condition is set to True. A subsequent deployment sets the Ready condition to False until the deployment succeeds, when the Ready condition is again set to True.
OpenStackDataPlaneNodeSet CR conditions:

Condition | Description
---|---
Ready | "True": The node set deployment succeeded and the node set is ready for use. "False": The deployment has not yet succeeded, or a new deployment is in progress.
SetupReady | "True": All setup tasks for a resource are complete. Setup tasks include verifying the SSH key secret, verifying other fields on the resource, and creating the Ansible inventory for each resource. Each service-specific condition is set to "True" when that service completes deployment. You can check the service conditions to see which services have completed their deployment, or which services failed.
DeploymentReady | "True": The node set has been successfully deployed.
InputReady | "True": The required inputs are available and ready.
NodeSetDNSDataReady | "True": DNSData resources are ready.
NodeSetIPReservationReady | "True": The IPSet resources are ready.
NodeSetBaremetalProvisionReady | "True": Bare-metal nodes are provisioned and ready.
OpenStackDataPlaneNodeSet CR status fields:

Status field | Description
---|---
Deployed | "True": The node set deployment has started and finished successfully.
DNSClusterAddresses | The DNS cluster addresses in use for the node set.
CtlplaneSearchDomain | The search domain for the ctlplane network.
OpenStackDataPlaneDeployment CR conditions:

Condition | Description
---|---
Ready | "True": The data plane is successfully deployed and ready for use. "False": The data plane deployment failed, or a new deployment is in progress.
DeploymentReady | "True": The data plane is successfully deployed.
InputReady | "True": The required inputs are available and ready.
<node_set> Deployment Ready | "True": The deployment has succeeded for the named OpenStackDataPlaneNodeSet, indicating all services for the node set have succeeded.
<node_set> <service> Deployment Ready | "True": The deployment has succeeded for the named OpenStackDataPlaneNodeSet and service. Each service condition is set to "True" as that service completes for the named node set.
OpenStackDataPlaneDeployment CR status fields:

Status field | Description
---|---
Deployed | "True": The data plane deployment has started and finished successfully.
OpenStackDataPlaneService CR conditions:

Condition | Description
---|---
Ready | "True": The service has been created and is ready for use. "False": The service has failed to be created.
4.10. Provisioning bare-metal data plane nodes
Provisioning bare-metal nodes on the data plane is supported with the Red Hat OpenShift Container Platform (RHOCP) Cluster Baremetal Operator (CBO). The CBO deploys the components required to provision bare-metal nodes within the RHOCP cluster, including the Bare Metal Operator (BMO) and Ironic containers.
The BMO manages the available hosts on clusters and performs the following operations:
- Inspects node hardware details and reports them to the corresponding BareMetalHost CR. This includes information about CPUs, RAM, disks, and NICs.
- Provisions nodes with a specific image.
- Cleans node disk contents before and after provisioning.
The availability of the CBO depends on which of the following installation methods was used for the RHOCP cluster:
- Assisted Installer
- You can enable CBO on clusters installed with the Assisted Installer, and you can manually add the provisioning network to the Assisted Installer cluster after installation.
- Installer-provisioned infrastructure
- CBO is enabled by default on RHOCP clusters that are installed with the bare-metal installer-provisioned infrastructure. You can configure installer-provisioned clusters with a provisioning network to enable both virtual media and network boot installations. Alternatively, you can configure an installer-provisioned cluster without a provisioning network so that only virtual media provisioning is available. For more information about installer-provisioned clusters on bare metal, see Deploying installer-provisioned clusters on bare metal.
- User-provisioned infrastructure
- You can activate CBO on RHOCP clusters installed with user-provisioned infrastructure by creating a Provisioning CR. You cannot add a provisioning network to a user-provisioned cluster. For more information about how to create a Provisioning CR, see Scaling a user-provisioned cluster with the Bare Metal Operator.
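For orientation, a minimal sketch of a BareMetalHost CR of the kind the prerequisites in this chapter expect to be registered before node set creation. The BMC address, MAC address, and credential secret name are placeholders, and the app: openstack label matches the bmhLabelSelector used in the bare-metal node set example:

apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: edpm-compute-0
  namespace: openshift-machine-api   # the namespace the BMO watches by default
  labels:
    app: openstack                   # matches the bmhLabelSelector in the node set
spec:
  online: true
  bootMACAddress: 52:54:00:00:00:01  # placeholder NIC MAC address
  bmc:
    address: redfish-virtualmedia://192.168.111.1:8000/redfish/v1/Systems/<system_id>
    credentialsName: edpm-compute-0-bmc-secret   # placeholder Secret with BMC credentials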
4.11. Troubleshooting data plane creation and deployment
Each data plane deployment in the environment has associated services. Each of these services has a job condition message that matches the current status of the AnsibleEE job executing for that service. You can use this information to troubleshoot deployments when services are not deploying or operating correctly.
Procedure
Determine the name and status of all deployments:

$ oc get openstackdataplanedeployment

The following example output shows two deployments currently in progress:

$ oc get openstackdataplanedeployment
NAME                   NODESETS             STATUS   MESSAGE
openstack-edpm-ipam1   ["openstack-edpm"]   False    Deployment in progress
openstack-edpm-ipam2   ["openstack-edpm"]   False    Deployment in progress
Determine the name and status of all services and their job condition:

$ oc get openstackansibleee

The following example output shows all services and their job condition for all current deployments:

$ oc get openstackansibleee
NAME                             NETWORKATTACHMENTS   STATUS   MESSAGE
bootstrap-openstack-edpm         ["ctlplane"]         True     AnsibleExecutionJob complete
download-cache-openstack-edpm    ["ctlplane"]         False    AnsibleExecutionJob is running
repo-setup-openstack-edpm        ["ctlplane"]         True     AnsibleExecutionJob complete
validate-network-another-osdpd   ["ctlplane"]         False    AnsibleExecutionJob is running
Filter for the name and service for a specific deployment:

$ oc get openstackansibleee -l osdpd=<deployment_name>

Replace <deployment_name> with the name of the deployment to use to filter the services list.
The following example filters the list to show only the services and their job condition for the openstack-edpm-ipam1 deployment:

$ oc get openstackansibleee -l osdpd=openstack-edpm-ipam1
NAME                            NETWORKATTACHMENTS   STATUS   MESSAGE
bootstrap-openstack-edpm        ["ctlplane"]         True     AnsibleExecutionJob complete
download-cache-openstack-edpm   ["ctlplane"]         False    AnsibleExecutionJob is running
repo-setup-openstack-edpm       ["ctlplane"]         True     AnsibleExecutionJob complete
Job Condition Messages
AnsibleEE jobs have an associated condition message that indicates the current state of the service job. This condition message is displayed in the MESSAGE field of the oc get openstackansibleee command output. Jobs return one of the following conditions when queried:

- AnsibleExecutionJob not started: The job has not started.
- AnsibleExecutionJob not found: The job could not be found.
- AnsibleExecutionJob is running: The job is currently running.
- AnsibleExecutionJob complete: The job execution is complete.
- AnsibleExecutionJob error occured <error_message>: The job stopped executing unexpectedly. The <error_message> is replaced with a specific error message.
To further investigate a service that displays a particular job condition message, view its logs by using the command oc logs job/<service>. For example, to display the logs for the repo-setup-openstack-edpm service, use the command oc logs job/repo-setup-openstack-edpm.
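After fixing the underlying issue, a common pattern with this deployment model, given that OpenStackDataPlaneDeployment CRs trigger one-shot Ansible executions, is to re-run the affected services by creating a fresh OpenStackDataPlaneDeployment CR that targets the same node sets; the name below is a placeholder:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-edpm-redeploy   # placeholder name for the retry deployment
spec:
  nodeSets:
    - openstack-edpm-ipam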