Chapter 4. Customizing the data plane
The Red Hat OpenStack Services on OpenShift (RHOSO) data plane consists of RHEL 9.4 nodes. You use the OpenStackDataPlaneNodeSet custom resource definition (CRD) to create the custom resources (CRs) that define the nodes and the layout of the data plane. You can use pre-provisioned nodes, or provision bare-metal nodes as part of the data plane creation and deployment process.
You can add additional node sets to your data plane by using the procedures in Creating the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide.
You can also modify existing OpenStackDataPlaneNodeSet CRs, add Compute cells to your data plane, and customize your data plane by creating custom services.
To prevent database corruption, do not edit the name of any cell in the custom resource definition (CRD).
4.1. Prerequisites
- The RHOSO environment is deployed on a Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Deploying Red Hat OpenStack Services on OpenShift.
- You are logged on to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.
Red Hat OpenStack Services on OpenShift (RHOSO) supports external deployments of Red Hat Ceph Storage 7 and 8. Configuration examples that reference Red Hat Ceph Storage use Release 7 information. If you are using Red Hat Ceph Storage 8, adjust the configuration examples accordingly.
4.2. Modifying an OpenStackDataPlaneNodeSet CR
You can modify an existing OpenStackDataPlaneNodeSet custom resource (CR), for example, to add a new node or update node configuration. You can include each node in only one OpenStackDataPlaneNodeSet CR, and you can connect each node set to only one Compute cell. By default, node sets are connected to cell1. If your control plane includes additional Compute cells, you must specify the cell to which the node set is connected.
To apply the OpenStackDataPlaneNodeSet CR modifications to the data plane, you create an OpenStackDataPlaneDeployment CR that deploys the modified OpenStackDataPlaneNodeSet CR.
When the OpenStackDataPlaneDeployment successfully completes execution, it does not automatically run Ansible again, even if the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources are changed. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR. Remove any failed OpenStackDataPlaneDeployment CRs in your environment before creating a new one so that the new OpenStackDataPlaneDeployment can run Ansible with an updated Secret.
Procedure
- Open the OpenStackDataPlaneNodeSet CR definition file for the node set that you want to update, for example, openstack_data_plane.yaml.
- Update or add the configuration you require. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.
- Save the OpenStackDataPlaneNodeSet CR definition file.
- Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f openstack_data_plane.yaml

- Verify that the data plane resource has been updated by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

  When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

  For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
- If there are any failed OpenStackDataPlaneDeployment CRs in your environment, remove them to allow a new OpenStackDataPlaneDeployment to run Ansible with an updated Secret.
- Create a file on your workstation to define the new OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: <node_set_deployment_name>

  - Replace <node_set_deployment_name> with the name of the OpenStackDataPlaneDeployment CR. The name must be unique, must consist of lowercase alphanumeric characters, - (hyphen), or . (period), and must start and end with an alphanumeric character.

  Tip: Give the definition file and the OpenStackDataPlaneDeployment CR unique and descriptive names that indicate the purpose of the modified node set.

- Add the OpenStackDataPlaneNodeSet CR that you modified:

    spec:
      nodeSets:
        - <nodeSet_name>
- Save the OpenStackDataPlaneDeployment CR deployment file.
- Deploy the modified OpenStackDataPlaneNodeSet CR:

    $ oc create -f openstack_data_plane_deploy.yaml -n openstack

  You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10

  If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit

- Verify that the modified OpenStackDataPlaneNodeSet CR is deployed. For information about the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.
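  One way to check the deployment is to inspect the deployment and node set resources and read their status conditions. This is a sketch that reuses the example resource names from this procedure:

    $ oc get openstackdataplanedeployment -n openstack
    $ oc get openstackdataplanenodeset openstack-data-plane -n openstack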
If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide.
- If you added a new node to the node set, then map the node to the Compute cell that it is connected to:

    $ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose

  If you did not create additional cells, this command maps the Compute nodes to cell1.
- Access the remote shell for the openstackclient pod and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    $ openstack hypervisor list
4.3. Data plane services
A data plane service is an Ansible execution that manages the installation, configuration, and execution of a software deployment on data plane nodes. Each service is a resource instance of the OpenStackDataPlaneService custom resource definition (CRD), which combines Ansible content and configuration data from ConfigMap and Secret CRs. You specify the Ansible execution for your service with Ansible play content, which can be an Ansible playbook from edpm-ansible, or any Ansible play content. The ConfigMap and Secret CRs can contain any configuration data that needs to be consumed by the Ansible content.
The OpenStack Operator provides core services that are deployed by default on data plane nodes. If you omit the services field from the OpenStackDataPlaneNodeSet specification, the OpenStack Operator applies its default list of services in its default order.
The OpenStack Operator also includes the following services that are not enabled by default:
| Service | Description |
|---|---|
| ceph-client | Include this service to configure data plane nodes as clients of a Red Hat Ceph Storage server. Include this service between the install-os and configure-os services. |
| ceph-hci-pre | Include this service to prepare data plane nodes to host Red Hat Ceph Storage in an HCI configuration. For more information, see Deploying a Hyperconverged Infrastructure environment. |
| neutron-dhcp | Include this service to run a Neutron DHCP agent on the data plane nodes. |
| neutron-ovn | Include this service to run the Neutron OVN agent on the data plane nodes. This agent is required to provide QoS to hardware offloaded ports on the Compute nodes. |
| neutron-sriov | Include this service to run a Neutron SR-IOV NIC agent on the data plane nodes. |
| telemetry-power-monitoring | Include this service to gather metrics for power consumption on the data plane nodes. Important: This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details. |
For more information about the available default services, see https://github.com/openstack-k8s-operators/openstack-operator/tree/main/config/services.
You can enable and disable services for an OpenStackDataPlaneNodeSet resource.
Do not change the order of the default service deployments.
You can use the OpenStackDataPlaneService CRD to create a custom service that you can deploy on your data plane nodes. You add your custom service to the default list of services at the point where the service must be executed. For more information, see Creating and enabling a custom service.
You can view the details of a service by viewing the YAML representation of the resource:
    $ oc get openstackdataplaneservice configure-network -o yaml -n openstack
4.3.1. Creating and enabling a custom service
You can use the OpenStackDataPlaneService CRD to create custom services to deploy on your data plane nodes.
Do not create a custom service with the same name as one of the default services. If a custom service name matches a default service name, the default service values overwrite the custom service values during OpenStackDataPlaneNodeSet reconciliation.
You specify the Ansible execution for your service either by referencing an Ansible playbook or by including the free-form play content directly in the playbookContents field of the service.
You cannot include both an Ansible playbook and playbookContents in the same service.
Procedure
- Create an OpenStackDataPlaneService CR and save it to a YAML file on your workstation, for example custom-service.yaml:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: custom-service
    spec:

- Specify the Ansible commands to create the custom service, by referencing an Ansible playbook or by including the Ansible play in the playbookContents field:

  - Specify the Ansible playbook to use:
  - Specify the Ansible play in the playbookContents field as a string that uses Ansible playbook syntax:
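  The two sketches that follow illustrate each option. They are minimal examples rather than definitive service definitions: the playbook field and the playbook name osp.edpm.run_os are assumed references to content shipped in the edpm-ansible (osp.edpm) collection, and the inline play only prints a debug message.

    # Option 1: reference an existing playbook (sketch; field and playbook name are assumptions)
    spec:
      playbook: osp.edpm.run_os

    # Option 2: embed free-form play content in playbookContents (sketch)
    spec:
      playbookContents: |
        - hosts: all
          tasks:
            - name: Run a task on each data plane node
              ansible.builtin.debug:
                msg: "custom-service executed"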
playbookContentsfield as a string that uses Ansible playbook syntax:Copy to Clipboard Copied! Toggle word wrap Toggle overflow For information about how to create an Ansible playbook, see Creating a playbook.
- Specify the edpmServiceType field for the service. You can have different custom services that use the same Ansible content to manage the same data plane service, for example, ovn or nova. The dataSources, TLS certificates, and CA certificates must be mounted at the same locations so that the Ansible content can locate them and reuse the same paths for a custom service. You use the edpmServiceType field to create this association. The value is the name of the default service that uses the same Ansible content as the custom service. For example, if you have a custom service that uses the edpm_ovn Ansible content from edpm-ansible, you set edpmServiceType to ovn, which matches the default ovn service name provided by the OpenStack Operator.
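  For example, a custom service that reuses the default ovn service's Ansible content might be defined as in the following sketch; the role reference osp.edpm.edpm_ovn is an assumption about the edpm-ansible collection layout:

    spec:
      edpmServiceType: ovn
      playbookContents: |
        - hosts: all
          roles:
            - osp.edpm.edpm_ovn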
  Note: The acronym edpm used in field names stands for "External Data Plane Management".

- Optional: To override the default container image used by the ansible-runner execution environment with a custom image that uses additional Ansible content for a custom service, build and include a custom ansible-runner image. For information, see Building a custom ansible-runner image.
- Optional: Specify the names of Secret or ConfigMap resources to use to pass secrets or configurations into the OpenStackAnsibleEE job:

  - dataSources.secretRef.optional: An optional field that, when set to "true", marks the resource as optional so that an error is not thrown if it does not exist.

  A mount is created for each Secret and ConfigMap CR in the OpenStackAnsibleEE pod with a filename that matches the resource value. The mounts are created under /var/lib/openstack/configs/<service name>. You can then use Ansible content to access the configuration or secret data.
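  The dataSources field described in this step might look like the following sketch; the Secret and ConfigMap names are hypothetical:

    spec:
      dataSources:
        - secretRef:
            name: my-service-secret
            optional: true
        - configMapRef:
            name: my-service-config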
- Optional: Set the deployOnAllNodeSets field to true if the service must run on all node sets in the OpenStackDataPlaneDeployment CR, even if the service is not listed as a service in every node set in the deployment:
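  For example (sketch):

    spec:
      deployOnAllNodeSets: true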
- Create the custom service:

    $ oc apply -f custom-service.yaml -n openstack

- Verify that the custom service is created:
    $ oc get openstackdataplaneservice <custom_service_name> -o yaml -n openstack

- Add the custom service to the services field in the definition file for the node sets that the service applies to. Add the service name at the position in the list where it must be executed relative to the other services. If the deployOnAllNodeSets field is set to true, then you need to add the service to only one of the node sets in the deployment.

  Note: When adding your custom service to the services list in a node set definition, you must include all the required services, including the default services. If you include only your custom service in the services list, then that is the only service that is deployed.
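  As a sketch, a node set that runs custom-service between two of the default services might list its services as follows; the named entries are only placeholders for the full default list, which must keep its default order:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-data-plane
    spec:
      services:
        # ...default services that run before the custom service, in their default order...
        - install-os
        - configure-os
        - custom-service
        - run-os
        # ...remaining default services...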
4.3.2. Building a custom ansible-runner image
You can override the default container image used by the ansible-runner execution environment with your own custom image when you need additional Ansible content for a custom service.
Procedure
- Create a Containerfile that adds the custom content to the default image:

    FROM quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest
    COPY my_custom_role /usr/share/ansible/roles/my_custom_role

- Build and push the image to a container registry:

    $ podman build -t quay.io/example_user/my_custom_image:latest .
    $ podman push quay.io/example_user/my_custom_image:latest

- Specify your new container image as the image that the ansible-runner execution environment must use to add the additional Ansible content that your custom service requires, such as Ansible roles or modules:
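  A sketch of the service definition, assuming the openStackAnsibleEERunnerImage spec field is used to point the execution environment at your image:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: custom-service
    spec:
      openStackAnsibleEERunnerImage: quay.io/example_user/my_custom_image:latest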
  - openStackAnsibleEERunnerImage: Your container image that the ansible-runner execution environment uses to execute Ansible.
4.4. Configuring a node set for a feature or workload
You can designate a node set for a particular feature or workload. To designate and configure a node set for a feature or workload, complete the following tasks:
- Create the ConfigMap custom resources (CRs) to configure the nodes for the feature.
- Create a custom service for the node set that runs the playbook for the service.
- Include the ConfigMap CRs in the custom service.
The Compute service (nova) provides a default ConfigMap CR named nova-extra-config, where you can add generic configuration that applies to all the node sets that use the default nova service. If you use this default nova-extra-config ConfigMap to add generic configuration to be applied to all the node sets, then you do not need to create a custom service.
Procedure
- Create a ConfigMap CR that defines a new configuration file for the feature:
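  The following is a minimal sketch; the ConfigMap name and the target filename are illustrative, and the configuration values reuse the examples from the replacements that follow:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nova-cpu-pinning-configuration
      namespace: openstack
    data:
      25-nova-cpu-pinning.conf: |
        [compute]
        cpu_shared_set = 2,6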
  Note: If you are using the default ConfigMap CR for the Compute service, named nova-extra-config, or any other ConfigMap or Secret intended to pass configuration options to the nova-compute service on the EDPM node, you must configure the target configuration filename to match the *nova*.conf pattern, for example, <integer>-nova-<feature>.conf. For more information, see Configuring the Compute service (nova) in Configuring the Compute service for instance creation.

  - Replace <integer> with a number that indicates when to apply the configuration. The control plane services apply every file in their service directory, /etc/<service>/<service>.conf.d/, in lexicographical order. Therefore, configurations defined in later files override the same configurations defined in earlier files. Each service operator generates the default configuration file with the name 01-<service>.conf. For example, the default configuration file for the nova-operator is 01-nova.conf.

    Note: Numbers below 25 are reserved for the OpenStack services and Ansible configuration files.

  - Replace <feature> with a string that indicates the feature being configured.

    Note: Do not use the name of the default configuration file, because it would override the infrastructure configuration, such as the transport_url.

  - Replace <[config_grouping]> with the name of the group that the configuration options belong to in the service configuration file, for example, [compute] or [database].
  - Replace <config_option> with the option that you want to configure, for example, cpu_shared_set.
  - Replace <value> with the value for the configuration option, for example, 2,6.

  When the service is deployed, it adds the configuration to the /etc/<service>/<service>.conf.d/ directory in the service container. For example, for a Compute feature, the configuration file is added to /etc/nova/nova.conf.d/ in the nova_compute container.

  For more information about creating ConfigMap objects, see Creating and using config maps in the RHOCP Nodes guide.

  Tip: You can use a Secret to create the custom configuration instead if the configuration includes sensitive information, such as passwords or certificates that are required for certification.

- Create a custom service for the node set. For information about how to create a custom service, see Creating and enabling a custom service.
- Add the ConfigMap CR to the custom service.
- Specify the Secret CR for the cell that the node set that runs this service connects to.
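  A sketch that combines the last two steps into one custom service definition. The service name and ConfigMap name are hypothetical, the playbook field and value are assumptions that mirror the default nova service, and nova-cell1-compute-config is the example cell Secret name used elsewhere in this guide:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: nova-cpu-pinning-custom
    spec:
      edpmServiceType: nova
      playbook: osp.edpm.nova    # assumption: reuse the default nova playbook
      dataSources:
        - configMapRef:
            name: nova-cpu-pinning-configuration
        - secretRef:
            name: nova-cell1-compute-config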
4.5. Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell
Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you added additional Compute cells to your control plane, you must specify to which cell the node set connects.
Procedure
- Create a custom nova service that includes the Secret custom resource (CR) for the cell to connect to:
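  A sketch of such a service; the playbook field and value are assumptions that mirror the default nova service, and the placeholders are explained in the replacements that follow:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: <nova_cell_custom>
    spec:
      edpmServiceType: nova
      playbook: osp.edpm.nova    # assumption: reuse the default nova playbook
      dataSources:
        - secretRef:
            name: <cell_secret_ref>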
  - Replace <nova_cell_custom> with a name for the custom service, for example, nova-cell1-custom.
  - Replace <cell_secret_ref> with the Secret CR generated by the control plane for the cell, for example, nova-cell1-compute-config.
For information about how to create a custom service, see Creating and enabling a custom service.
- If you configured each cell with a dedicated nova metadata API service, create a custom neutron-metadata service for each cell that includes the Secret CR for connecting to the cell:
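  A sketch that mirrors the previous example; the playbook field and value are assumptions based on the default neutron-metadata service:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: <neutron_cell_metadata_custom>
    spec:
      edpmServiceType: neutron-metadata
      playbook: osp.edpm.neutron_metadata    # assumption
      dataSources:
        - secretRef:
            name: <cell_metadata_secret_ref>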
  - Replace <neutron_cell_metadata_custom> with a name for the custom service, for example, neutron-cell1-metadata-custom.
  - Replace <cell_metadata_secret_ref> with the Secret CR generated by the control plane for the cell, for example, nova-cell1-metadata-neutron-config.
- Open the OpenStackDataPlaneNodeSet CR file for the cell node set, for example, openstack_cell1_node_set.yaml.
- Replace the nova service in your OpenStackDataPlaneNodeSet CR with your custom nova service for the cell:

  Note: Do not change the order of the default services.
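  For example (sketch; the omitted entries stand for the rest of the default service list, unchanged and in order):

    spec:
      services:
        # ...default services that precede nova, unchanged and in order...
        - libvirt
        - nova-cell1-custom
        # ...default services that follow nova, unchanged and in order...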
- If you created a custom neutron-metadata service, add it to the list of services or replace the neutron-metadata service with your custom service for the cell:
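  For example (sketch):

    spec:
      services:
        # ...other services, unchanged and in order...
        - neutron-cell1-metadata-custom
        # ...remaining services...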
- Complete the configuration of your OpenStackDataPlaneNodeSet CR. For more information, see Creating the data plane.
- Save the OpenStackDataPlaneNodeSet CR definition file.
- Create the data plane resources:

    $ oc create -f openstack_cell1_node_set.yaml

- Verify that the data plane resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-cell1 --for condition=SetupReady --timeout=10m

  When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

  For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

- Verify that the Secret resource was created for the node set:

    $ oc get secret | grep openstack-cell1
    openstack_cell1_node_set   Opaque   1   3m50s

- Verify that the services were created:

    $ oc get openstackdataplaneservice -n openstack | grep nova-cell1-custom
- Create an OpenStackDataPlaneDeployment CR to deploy the OpenStackDataPlaneNodeSet CR. For more information, see Deploying the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide.
4.6. Limiting the ports available to the OpenStackProvisionServer CR
The OpenStackProvisionServer custom resource (CR) is automatically created by default during the installation and deployment of your Red Hat OpenStack Services on OpenShift (RHOSO) environment. By default, the OpenStackProvisionServer CR uses the port range 6190-6220. You can create a custom OpenStackProvisionServer CR to limit the ports that must be opened.
Procedure
- Create a file on your workstation to define the OpenStackProvisionServer CR, for example, my_os_provision_server.yaml:
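  A minimal sketch; the apiVersion shown is an assumption, and any additional required fields, such as the OS image settings for your environment, are omitted here:

    apiVersion: baremetal.openstack.org/v1beta1
    kind: OpenStackProvisionServer
    metadata:
      name: my-os-provision-server
      namespace: openstack
    spec:
      port: 6190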
  - port: Specifies the port that you want to open. The port must be in the OpenStackProvisionServer CR range of 6190-6220.
- Create the OpenStackProvisionServer CR:

    $ oc create -f my_os_provision_server.yaml -n openstack
4.7. Registering third-party nodes with the DNS server
The Red Hat OpenStack Services on OpenShift (RHOSO) DNS server is configured only for data plane nodes. If the data plane nodes must resolve third-party nodes that cannot be resolved by the upstream DNS server that the dnsmasq service is configured to forward requests to, then you can register the third-party nodes with the same DNS instance that the data plane nodes are configured with.
To register third-party nodes, you create DNSData custom resources (CRs). Creating a DNSData CR updates the DNS configuration and restarts the dnsmasq pods that can then read and resolve the DNS information in the associated DNSData CR.
All nodes must be able to resolve the hostnames of the Red Hat OpenShift Container Platform (RHOCP) pods, for example, by using the external IP of the dnsmasq service.
Procedure
- Create a file on your workstation named host_dns_data.yaml to define the DNSData CR:

    apiVersion: network.openstack.org/v1beta1
    kind: DNSData
    metadata:
      name: my-dnsdata
      namespace: openstack

- Define the hostnames and IP addresses of each host:
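  A sketch that continues the DNSData CR from the previous step; the hostnames and IP addresses are placeholders:

    spec:
      hosts:
        - hostnames:
            - third-party-node0.example.com
            - third-party-node0
          ip: 192.168.122.50
        - hostnames:
            - third-party-node1.example.com
          ip: 192.168.122.51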
hosts.hostnames: Lists the hostnames that can be used to access the third-party node. -
hosts.ip: Defines the IP address of the third-party node to which the hostname resolves.
- Create the DNSData CR:

    $ oc apply -f host_dns_data.yaml -n openstack