Chapter 4. Customizing the data plane
You can customize your data plane by adding additional node sets, modifying existing OpenStackDataPlaneNodeSet CRs, adding Compute cells, and creating custom services.
To add additional node sets to your data plane, use the procedures in Creating the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide.
To prevent database corruption, do not edit the name of any cell in the custom resource (CR).
4.1. Prerequisites
- The RHOSO environment is deployed on a Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Deploying Red Hat OpenStack Services on OpenShift.
- You are logged on to a workstation that has access to the RHOCP cluster as a user with `cluster-admin` privileges.
Red Hat OpenStack Services on OpenShift (RHOSO) supports external deployments of Red Hat Ceph Storage 7, 8, and 9. Configuration examples that reference Red Hat Ceph Storage use Release 7 information. If you are using a later version of Red Hat Ceph Storage, adjust the configuration examples accordingly.
4.2. Modifying an OpenStackDataPlaneNodeSet CR
Modify an existing OpenStackDataPlaneNodeSet custom resource (CR) to update node configurations or add new nodes to your data plane. Use a new OpenStackDataPlaneDeployment CR to apply these modifications to the data plane.
You must create a new `OpenStackDataPlaneDeployment` CR to start another Ansible execution that applies any changes you made to the data plane. When an `OpenStackDataPlaneDeployment` CR completes successfully, it does not run Ansible again automatically, even if the `OpenStackDataPlaneDeployment` CR or related `OpenStackDataPlaneNodeSet` resources change.
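As a sketch of that workflow, each Ansible run gets its own deployment CR that references the node set to reconcile. The deployment name and node set name below are illustrative, not taken from this guide:

```yaml
# Hypothetical minimal deployment CR: a fresh one is created per Ansible run.
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: data-plane-update-1
  namespace: openstack
spec:
  nodeSets:
    - openstack-data-plane
```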
Procedure
- Open the `OpenStackDataPlaneNodeSet` CR definition file for the node set you want to update, for example, `openstack_data_plane.yaml`.
- Update or add the configuration you require. For information about the properties you can use to configure common node attributes, see `OpenStackDataPlaneNodeSet` CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.
- Save the `OpenStackDataPlaneNodeSet` CR definition file.
- Apply the updated `OpenStackDataPlaneNodeSet` CR configuration:

  ```
  $ oc apply -f openstack_data_plane.yaml
  ```

- Verify that the data plane resource has been updated by confirming that the status is `SetupReady`:

  ```
  $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m
  ```

  When the status is `SetupReady`, the command returns a `condition met` message; otherwise it returns a timeout error. For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
- If there are any failed `OpenStackDataPlaneDeployment` CRs in your environment, remove them to allow a new `OpenStackDataPlaneDeployment` to run Ansible with an updated `Secret`.
- Create a file on your workstation to define the new `OpenStackDataPlaneDeployment` CR:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneDeployment
  metadata:
    name: <node_set_deployment_name>
  ```

  - Replace `<node_set_deployment_name>` with the name of the `OpenStackDataPlaneDeployment` CR. The name must be unique, must consist of lower case alphanumeric characters, `-` (hyphen) or `.` (period), and must start and end with an alphanumeric character.

  Tip: Give the definition file and the `OpenStackDataPlaneDeployment` CR unique and descriptive names that indicate the purpose of the modified node set.
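The naming rule above can be pre-checked before you create the CR. The following is a local sketch only; the regex is a simplified approximation of the rule stated above (it does not enforce every Kubernetes naming constraint), and the sample names are hypothetical:

```shell
# Approximate check: lower case alphanumerics, '-' or '.',
# starting and ending with an alphanumeric character.
valid_name() {
  [[ "$1" =~ ^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$ ]]
}

valid_name "openstack-edpm-update" && echo "ok: openstack-edpm-update"
valid_name "-bad-name" || echo "rejected: -bad-name"
valid_name "Bad.Name" || echo "rejected: Bad.Name"
```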
- Add the `OpenStackDataPlaneNodeSet` CR that you modified:

  ```yaml
  spec:
    nodeSets:
      - <nodeSet_name>
  ```
- Save the `OpenStackDataPlaneDeployment` CR deployment file.
- Deploy the modified `OpenStackDataPlaneNodeSet` CR:

  ```
  $ oc create -f openstack_data_plane_deploy.yaml -n openstack
  ```

  You can view the Ansible logs while the deployment executes:

  ```
  $ oc get pod -l app=openstackansibleee -w
  $ oc logs -l app=openstackansibleee -f --max-log-requests 10
  ```

  If the `oc logs` command returns an error similar to the following, increase the `--max-log-requests` value:

  ```
  error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
  ```

- Verify that the modified `OpenStackDataPlaneNodeSet` CR is deployed:

  ```
  $ oc get openstackdataplanedeployment -n openstack
  NAME                   STATUS   MESSAGE
  openstack-data-plane   True     Setup Complete

  $ oc get openstackdataplanenodeset -n openstack
  NAME                   STATUS   MESSAGE
  openstack-data-plane   True     NodeSet Ready
  ```

  For information about the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide. If the status indicates that the data plane has not been deployed, troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide.

- If you added a new node to the node set, map the node to the Compute cell it is connected to:

  ```
  $ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose
  ```

  If you did not create additional cells, this command maps the Compute nodes to `cell1`.

- Access the remote shell for the `openstackclient` pod and verify that the deployed Compute nodes are visible on the control plane:

  ```
  $ oc rsh -n openstack openstackclient
  $ openstack hypervisor list
  ```
4.3. Data plane services
A data plane service is an Ansible execution that manages the installation, configuration, and execution of a software deployment on data plane nodes. Each service is a resource instance of the OpenStackDataPlaneService custom resource definition (CRD), which combines Ansible content and configuration data from ConfigMap and Secret CRs. You specify the Ansible execution for your service with Ansible play content, which can be an Ansible playbook from edpm-ansible, or any Ansible play content. The ConfigMap and Secret CRs can contain any configuration data that needs to be consumed by the Ansible content.
The OpenStack Operator provides core services that are deployed by default on data plane nodes. If you omit the services field from the OpenStackDataPlaneNodeSet specification, then the following services are applied by default in the following order:
```yaml
services:
  - redhat
  - bootstrap
  - download-cache
  - configure-network
  - validate-network
  - install-os
  - configure-os
  - ssh-known-hosts
  - run-os
  - reboot-os
  - install-certs
  - ovn
  - neutron-metadata
  - libvirt
  - nova
  - telemetry
```
The OpenStack Operator also includes the following services that are not enabled by default:
| Service | Description |
|---|---|
| `ceph-client` | Include this service to configure data plane nodes as clients of a Red Hat Ceph Storage server. Include between the |
| `ceph-hci-pre` | Include this service to prepare data plane nodes to host Red Hat Ceph Storage in an HCI configuration. For more information, see Deploying a hyperconverged infrastructure environment. |
| `neutron-dhcp` | Include this service to run a Neutron DHCP agent on the data plane nodes. |
| `neutron-ovn` | Include this service to run the Neutron OVN agent on the data plane nodes. This agent is required to provide QoS to hardware offloaded ports on the Compute nodes. |
| `neutron-sriov` | Include this service to run a Neutron SR-IOV NIC agent on the data plane nodes. |
| `telemetry-power-monitoring` | Include this service to gather metrics for power consumption on the data plane nodes. Important: This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details. |
For more information about the available default services, see https://github.com/openstack-k8s-operators/openstack-operator/tree/main/config/services.
You can enable and disable services for an OpenStackDataPlaneNodeSet resource.
Do not change the order of the default service deployments.
You can use the OpenStackDataPlaneService CRD to create a custom service that you can deploy on your data plane nodes. You add your custom service to the default list of services where the service must be executed. For more information, see Creating and enabling a custom service.
You can view the details of a service by viewing the YAML representation of the resource:
```
$ oc get openstackdataplaneservice configure-network -o yaml -n openstack
```
4.3.1. Creating and enabling a custom service
Create custom data plane services by using the OpenStackDataPlaneService custom resource definition (CRD) to deploy additional or non-default software configurations or Ansible content on your data plane nodes.
Do not create a custom service with the same name as one of the default services. If a custom service name matches a default service name, the default service values overwrite the custom service values during OpenStackDataPlaneNodeSet reconciliation.
You specify the Ansible execution for your service with either an Ansible playbook or by including the free-form playbook contents directly in the playbookContents section of the service.
You cannot include an Ansible playbook and playbookContents in the same service.
Procedure
- Create an `OpenStackDataPlaneService` CR and save it to a YAML file on your workstation, for example `custom-service.yaml`:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneService
  metadata:
    name: custom-service
  spec:
  ```

- Specify the Ansible commands to create the custom service, by referencing an Ansible playbook or by including the Ansible play in the `playbookContents` field:

  - Specify the Ansible playbook to use:

    ```yaml
    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: custom-service
    spec:
      playbook: osp.edpm.configure_os
    ```

  - Specify the Ansible play in the `playbookContents` field as a string that uses Ansible playbook syntax:

    ```yaml
    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: custom-service
    spec:
      playbookContents: |
        - hosts: all
          tasks:
            - name: Hello World!
              shell: "echo Hello World!"
              register: output
            - name: Show output
              debug:
                msg: "{{ output.stdout }}"
            - name: Hello World role
              import_role:
                name: hello_world
    ```

  For information about how to create an Ansible playbook, see Creating a playbook.
- Specify the `edpmServiceType` field for the service. You can have different custom services that use the same Ansible content to manage the same data plane service, for example, `ovn` or `nova`. The `dataSources`, TLS certificates, and CA certificates must be mounted at the same locations so that Ansible content can locate them and re-use the same paths for a custom service. You use the `edpmServiceType` field to create this association. The value is the name of the default service that uses the same Ansible content as the custom service. For example, if you have a custom service that uses the `edpm_ovn` Ansible content from `edpm-ansible`, you set `edpmServiceType` to `ovn`, which matches the default `ovn` service name provided by the OpenStack Operator.

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneService
  metadata:
    name: custom-service
  spec:
    ...
    edpmServiceType: ovn
  ```

  Note: The acronym `edpm` used in field names stands for "External Data Plane Management".
Optional: To override the default container image used by the
ansible-runnerexecution environment with a custom image that uses additional Ansible content for a custom service, build and include a customansible-runnerimage. For information, see Building a customansible-runnerimage. Optional: Specify the names of
SecretorConfigMapresources to use to pass secrets or configurations into theOpenStackAnsibleEEjob:apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: custom-service spec: ... playbookContents: | ... dataSources: - configMapRef: name: hello-world-cm-0 - secretRef: name: hello-world-secret-0 - secretRef: name: hello-world-secret-1 optional: truedatasources.secretRef.optional: An optional field that, when set to "true", marks the resource as optional so that an error is not thrown if it doesn’t exist.A mount is created for each
SecretandConfigMapCR in theOpenStackAnsibleEEpod with a filename that matches the resource value. The mounts are created under/var/lib/openstack/configs/<service name>. You can then use Ansible content to access the configuration or secret data.
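As an illustrative sketch of consuming those mounts, Ansible content in the `playbookContents` could read one of the mounted files. The service name, mounted file name, and task names here are assumptions based on the example resources above, not a documented recipe:

```yaml
# Hypothetical play: read a ConfigMap file mounted for the custom-service
# service under /var/lib/openstack/configs/ and print its contents.
- hosts: all
  tasks:
    - name: Read the mounted hello-world configuration
      ansible.builtin.slurp:
        src: /var/lib/openstack/configs/custom-service/hello-world-cm-0
      register: mounted_config

    - name: Show the configuration contents
      ansible.builtin.debug:
        msg: "{{ mounted_config.content | b64decode }}"
```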
- Optional: Set the `deployOnAllNodeSets` field to `true` if the service must run on all node sets in the `OpenStackDataPlaneDeployment` CR, even if the service is not listed as a service in every node set in the deployment:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneService
  metadata:
    name: custom-service
  spec:
    playbookContents: |
      ...
    deployOnAllNodeSets: true
  ```

- Create the custom service:

  ```
  $ oc apply -f custom-service.yaml -n openstack
  ```

- Verify that the custom service is created:

  ```
  $ oc get openstackdataplaneservice <custom_service_name> -o yaml -n openstack
  ```

- Add the custom service to the `services` field in the definition file for the node sets the service applies to. Add the service name in the order that it should be executed relative to the other services. If the `deployOnAllNodeSets` field is set to `true`, then you need to add the service to only one of the node sets in the deployment.

  Note: When adding your custom service to the services list in a node set definition, you must include all the required services, including the default services. If you include only your custom service in the services list, then that is the only service that is deployed.
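Putting the note above into practice, a node set services list that runs a custom service after the defaults might look like the following sketch. The placement of `custom-service` at the end is illustrative; position it wherever it must execute relative to the other services:

```yaml
# Sketch: full default service list with a custom service appended.
spec:
  services:
    - redhat
    - bootstrap
    - download-cache
    - configure-network
    - validate-network
    - install-os
    - configure-os
    - ssh-known-hosts
    - run-os
    - reboot-os
    - install-certs
    - ovn
    - neutron-metadata
    - libvirt
    - nova
    - telemetry
    - custom-service
```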
4.3.2. Building a custom ansible-runner image
You can override the default container image used by the ansible-runner execution environment with your own custom image when you need additional Ansible content for a custom service.
Procedure
- Create a `Containerfile` that adds the custom content to the default image:

  ```
  FROM quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest
  COPY my_custom_role /usr/share/ansible/roles/my_custom_role
  ```

- Build and push the image to a container registry:

  ```
  $ podman build -t quay.io/example_user/my_custom_image:latest .
  $ podman push quay.io/example_user/my_custom_image:latest
  ```

- Specify your new container image as the image that the `ansible-runner` execution environment must use to add the additional Ansible content that your custom service requires, such as Ansible roles or modules:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneService
  metadata:
    name: custom-service
  spec:
    label: dataplane-deployment-custom-service
    openStackAnsibleEERunnerImage: quay.io/example_user/my_custom_image:latest
    playbookContents: |
      ...
  ```

  - `openStackAnsibleEERunnerImage`: Your container image that the `ansible-runner` execution environment uses to execute Ansible.
4.4. Configuring a node set for a feature or workload
You can configure a node set to designate it for a particular feature or workload.
The Compute service (nova) provides a default ConfigMap CR named nova-extra-config, where you can add generic configuration that applies to all the node sets that use the default nova service. If you use this default nova-extra-config ConfigMap to add generic configuration to be applied to all the node sets, then you do not need to create a custom service.
Procedure
- Create a `ConfigMap` CR that defines a new configuration file for the feature:

  ```yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: feature-configmap
    namespace: openstack
  data:
    <integer>-<feature>.conf: |
      <[config_grouping]>
      <config_option> = <value>
      <config_option> = <value>
  ```

  Note: If you are using the default `ConfigMap` CR for the Compute service named `nova-extra-config`, or any other `ConfigMap` or `Secret` intended to pass configuration options to the `nova-compute` service on the EDPM node, you must configure the target configuration filename to match `nova.conf`, for example, `<integer>-nova-<feature>.conf`. For more information, see Configuring the Compute service (nova) in Configuring the Compute service for instance creation.

  - Replace `<integer>` with a number that indicates when to apply the configuration. The control plane services apply every file in their service directory, `/etc/<service>/<service>.conf.d/`, in lexicographical order. Therefore, configurations defined in later files override the same configurations defined in an earlier file. Each service operator generates the default configuration file with the name `01-<service>.conf`. For example, the default configuration file for the `nova-operator` is `01-nova.conf`.

    Note: Numbers below 25 are reserved for the OpenStack services and Ansible configuration files.
  - Replace `<feature>` with a string that indicates the feature being configured.

    Note: Do not use the name of the default configuration file, because it would override the infrastructure configuration, such as the `transport_url`.

  - Replace `<[config_grouping]>` with the name of the group the configuration options belong to in the service configuration file, for example, `[compute]` or `database`.
  - Replace `<config_option>` with the option you want to configure, for example, `cpu_shared_set`.
  - Replace `<value>` with the value for the configuration option, for example, `2,6`.

  When the service is deployed, it adds the configuration to the `/etc/<service>/<service>.conf.d/` directory in the service container. For example, for a Compute feature, the configuration file is added to `/etc/nova/nova.conf.d/` in the `nova_compute` container.

  For more information on creating `ConfigMap` objects, see Creating and using config maps in the RHOCP Nodes guide.
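The lexicographical override behavior described above can be sketched locally, outside the cluster. This is an illustration only; the directory, file names, and option values are hypothetical:

```shell
# Simulate a <service>.conf.d/ drop-in directory: files are applied in
# lexicographical order, so later files override earlier ones.
dir=$(mktemp -d)
printf '[compute]\ncpu_shared_set = 0-3\n' > "$dir/01-nova.conf"
printf '[compute]\ncpu_shared_set = 2,6\n' > "$dir/25-nova-feature.conf"

# The effective value is the last assignment in sorted file order.
effective=$(cat $(ls "$dir"/*.conf | sort) | awk -F' = ' '/cpu_shared_set/ {v=$2} END {print v}')
echo "effective cpu_shared_set: $effective"   # 2,6 wins over 0-3
rm -rf "$dir"
```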
  Tip: You can use a `Secret` to create the custom configuration instead if the configuration includes sensitive information, such as passwords or certificates.

- Create a custom service for the node set. For information about how to create a custom service, see Creating and enabling a custom service.
- Add the `ConfigMap` CR to the custom service:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneService
  metadata:
    name: <nodeset>-service
  spec:
    ...
    dataSources:
      - configMapRef:
          name: feature-configmap
  ```

- Specify the `Secret` CR for the cell that the node set that runs this service connects to:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneService
  metadata:
    name: <nodeset>-service
  spec:
    ...
    dataSources:
      - configMapRef:
          name: feature-configmap
      - secretRef:
          name: nova-migration-ssh-key
      - secretRef:
          name: nova-cell1-compute-config
  ```
4.5. Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell
Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you added additional Compute cells to your control plane, you must specify to which cell the node set connects.
Procedure
- Create a custom `nova` service that includes the `Secret` custom resource (CR) for the cell to connect to:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneService
  metadata:
    name: <nova_cell_custom>
  spec:
    playbook: osp.edpm.nova
    ...
    dataSources:
      - secretRef:
          name: <cell_secret_ref>
    edpmServiceType: nova
  ```

  - Replace `<nova_cell_custom>` with a name for the custom service, for example, `nova-cell1-custom`.
  - Replace `<cell_secret_ref>` with the `Secret` CR generated by the control plane for the cell, for example, `nova-cell1-compute-config`.

  For information about how to create a custom service, see Creating and enabling a custom service.
- If you configured each cell with a dedicated `nova` metadata API service, create a custom `neutron-metadata` service for each cell that includes the `Secret` CR for connecting to the cell:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneService
  metadata:
    name: <neutron_cell_metadata_custom>
  spec:
    playbook: osp.edpm.neutron_metadata
    ...
    dataSources:
      - secretRef:
          name: neutron-ovn-metadata-agent-neutron-config
      - secretRef:
          name: <cell_metadata_secret_ref>
    edpmServiceType: neutron-metadata
  ```

  - Replace `<neutron_cell_metadata_custom>` with a name for the custom service, for example, `neutron-cell1-metadata-custom`.
  - Replace `<cell_metadata_secret_ref>` with the `Secret` CR generated by the control plane for the cell, for example, `nova-cell1-metadata-neutron-config`.
- Open the `OpenStackDataPlaneNodeSet` CR file for the cell node set, for example, `openstack_cell1_node_set.yaml`.
- Replace the `nova` service in your `OpenStackDataPlaneNodeSet` CR with your custom `nova` service for the cell:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneNodeSet
  metadata:
    name: openstack-cell1
  spec:
    services:
      - download-cache
      - redhat
      - bootstrap
      - configure-network
      - validate-network
      - install-os
      - configure-os
      - ssh-known-hosts
      - run-os
      - ovn
      - libvirt
      - nova-cell1-custom
      - telemetry
  ```

  Note: Do not change the order of the default services.
- If you created a custom `neutron-metadata` service, add it to the list of services or replace the `neutron-metadata` service with your custom service for the cell:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneNodeSet
  metadata:
    name: openstack-cell1
  spec:
    services:
      - download-cache
      - redhat
      - bootstrap
      - configure-network
      - validate-network
      - install-os
      - configure-os
      - ssh-known-hosts
      - run-os
      - ovn
      - libvirt
      - nova-cell1-custom
      - neutron-cell1-metadata-custom
      - telemetry
  ```
- Complete the configuration of your `OpenStackDataPlaneNodeSet` CR. For more information, see Creating the data plane.
- Save the `OpenStackDataPlaneNodeSet` CR definition file.
- Create the data plane resources:

  ```
  $ oc create -f openstack_cell1_node_set.yaml
  ```

- Verify that the data plane resources have been created by confirming that the status is `SetupReady`:

  ```
  $ oc wait openstackdataplanenodeset openstack-cell1 --for condition=SetupReady --timeout=10m
  ```

  When the status is `SetupReady`, the command returns a `condition met` message; otherwise it returns a timeout error. For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

- Verify that the `Secret` resource was created for the node set:

  ```
  $ oc get secret | grep openstack-cell1
  openstack_cell1_node_set   Opaque   1   3m50s
  ```

- Verify the services were created:

  ```
  $ oc get openstackdataplaneservice -n openstack | grep nova-cell1-custom
  ```

- Create an `OpenStackDataPlaneDeployment` CR to deploy the `OpenStackDataPlaneNodeSet` CR. For more information, see Deploying the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide.
4.6. Limiting the ports available to the OpenStackProvisionServer CR
Create a custom OpenStackProvisionServer custom resource (CR) to limit the ports opened during RHOSO installation and deployment. By default, the port range used is 6190-6220.
Procedure
- Create a file on your workstation to define the `OpenStackProvisionServer` CR, for example, `my_os_provision_server.yaml`:

  ```yaml
  apiVersion: baremetal.openstack.org/v1beta1
  kind: OpenStackProvisionServer
  metadata:
    name: my-os-provision-server
  spec:
    interface: enp1s0
    port: 6195
    osImage: edpm-hardened-uefi.qcow2
  ```

  - `port`: Specifies the port that you want to open. Must be in the `OpenStackProvisionServer` CR range: 6190 - 6220.

- Create the `OpenStackProvisionServer` CR:

  ```
  $ oc create -f my_os_provision_server.yaml -n openstack
  ```
4.7. Registering third-party nodes with the DNS server
The Red Hat OpenStack Services on OpenShift (RHOSO) DNS server is configured only for data plane nodes. If the data plane nodes must resolve third-party nodes that cannot be resolved by the upstream DNS server that the dnsmasq service is configured to forward requests to, then you can register the third-party nodes with the same DNS instance that the data plane nodes are configured with.
To register third-party nodes, you create DNSData custom resources (CRs). Creating a DNSData CR updates the DNS configuration and restarts the dnsmasq pods that can then read and resolve the DNS information in the associated DNSData CR.
All nodes must be able to resolve the hostnames of the Red Hat OpenShift Container Platform (RHOCP) pods, for example, by using the external IP of the dnsmasq service.
Procedure
- Create a file on your workstation named `host_dns_data.yaml` to define the `DNSData` CR:

  ```yaml
  apiVersion: network.openstack.org/v1beta1
  kind: DNSData
  metadata:
    name: my-dnsdata
    namespace: openstack
  ```

- Define the hostnames and IP addresses of each host:

  ```yaml
  spec:
    hosts:
      - hostnames:
          - my-host.some.domain
          - same-host.some.domain
        ip: 10.1.1.1
      - hostnames:
          - my-other-host.some.domain
        ip: 10.1.1.2
  ```

  - `hosts.hostnames`: Lists the hostnames that can be used to access the third-party node.
  - `hosts.ip`: Defines the IP address of the third-party node to which the hostname resolves.

- Create the `DNSData` CR:

  ```
  $ oc apply -f host_dns_data.yaml -n openstack
  ```
4.8. Configuring proxies for data plane nodes
The `edpm_bootstrap_command` Ansible variable is used to configure system proxy settings. This variable passes shell commands to be executed in the deployment of the bootstrap service of the node. If the services list is customized with services that execute prior to bootstrap, then the commands specified by `edpm_bootstrap_command` run after the custom services.
Procedure
- Open the `OpenStackDataPlaneNodeSet` CR definition file for the node set you want to update, for example, `openstack_data_plane.yaml`.
- Locate the `ansibleVars` section of the definition file.
- Use the `edpm_bootstrap_command` variable to append proxy values to the `/etc/environment` file on the node. The following is an example of using the variable for this purpose:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneNodeSet
  metadata:
    name: openstack-data-plane
    namespace: openstack
  spec:
    ...
    ansibleVars:
      edpm_bootstrap_command: |
        set -e
        cat >>/etc/environment <<EOF
        http_proxy=<http_proxy>
        https_proxy=<https_proxy>
        no_proxy=<no_proxy>
        EOF
  ```
  where:

  - `set -e`: Forces the `edpm_bootstrap_command` sequence to exit immediately if any command fails. This prevents the system from treating a partial or corrupted configuration as a success.
  - `http_proxy`: The proxy that you want to use for standard HTTP requests.
  - `https_proxy`: The proxy that you want to use for HTTPS requests.
  - `no_proxy`: A comma-separated list of domains that you want to exclude from proxy communications.
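The append-with-heredoc pattern used by `edpm_bootstrap_command` can be exercised locally before you commit it to the node set. This sketch writes to a temporary file instead of the real `/etc/environment`, and the proxy URLs are hypothetical placeholders:

```shell
# Simulate appending proxy settings to /etc/environment with a heredoc.
set -e
env_file=$(mktemp)
cat >>"$env_file" <<EOF
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
no_proxy=localhost,127.0.0.1,.internal.example.com
EOF
grep -c 'proxy' "$env_file"   # all three appended lines match
rm -f "$env_file"
```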
- Save the `OpenStackDataPlaneNodeSet` CR definition file.
- Apply the updated `OpenStackDataPlaneNodeSet` CR configuration:

  ```
  $ oc apply -f openstack_data_plane.yaml
  ```

- Verify that the data plane resource has been updated by confirming that the status is `SetupReady`:

  ```
  $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m
  ```

  When the status is `SetupReady`, the command returns a `condition met` message; otherwise it returns a timeout error. For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
- If there are any failed `OpenStackDataPlaneDeployment` CRs in your environment, remove them to allow a new `OpenStackDataPlaneDeployment` to run Ansible with an updated `Secret`.
- Create a file on your workstation to define the new `OpenStackDataPlaneDeployment` CR:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneDeployment
  metadata:
    name: <node_set_deployment_name>
  ```

  - Replace `<node_set_deployment_name>` with the name of the `OpenStackDataPlaneDeployment` CR. The name must be unique, must consist of lower case alphanumeric characters, `-` (hyphen) or `.` (period), and must start and end with an alphanumeric character.

  Tip: Give the definition file and the `OpenStackDataPlaneDeployment` CR unique and descriptive names that indicate the purpose of the modified node set.
- Add the `OpenStackDataPlaneNodeSet` CR that you modified:

  ```yaml
  spec:
    nodeSets:
      - <nodeSet_name>
  ```
- Save the `OpenStackDataPlaneDeployment` CR deployment file.
- Deploy the modified `OpenStackDataPlaneNodeSet` CR:

  ```
  $ oc create -f openstack_data_plane_deploy.yaml -n openstack
  ```

  You can view the Ansible logs while the deployment executes:

  ```
  $ oc get pod -l app=openstackansibleee -w
  $ oc logs -l app=openstackansibleee -f --max-log-requests 10
  ```

  If the `oc logs` command returns an error similar to the following, increase the `--max-log-requests` value:

  ```
  error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
  ```

- Verify that the modified `OpenStackDataPlaneNodeSet` CR is deployed:

  ```
  $ oc get openstackdataplanedeployment -n openstack
  NAME                   STATUS   MESSAGE
  openstack-data-plane   True     Setup Complete

  $ oc get openstackdataplanenodeset -n openstack
  NAME                   STATUS   MESSAGE
  openstack-data-plane   True     NodeSet Ready
  ```

  For information about the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide. If the status indicates that the data plane has not been deployed, troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide.

- If you added a new node to the node set, map the node to the Compute cell it is connected to:

  ```
  $ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose
  ```

  If you did not create additional cells, this command maps the Compute nodes to `cell1`.

- Access the remote shell for the `openstackclient` pod and verify that the deployed Compute nodes are visible on the control plane:

  ```
  $ oc rsh -n openstack openstackclient
  $ openstack hypervisor list
  ```
4.9. Deploying data plane nodes using Image Mode (bootc) images
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
You can use Image Mode (bootc) to build, deploy, and manage data plane images as containers, as an alternative to RPM-based approaches.
Consider the following points when deciding to use Image Mode images for this purpose:
- You must always reboot the node to use Image Mode images.
- The `edpm_update_services` service is version dependent on `openstack-selinux`. If `edpm_update_services` requires a new version of `openstack-selinux`, build an updated Image Mode image and switch to the updated image.
- You should perform system updates during planned maintenance windows because of the requirement to reboot a node for it to use an updated image.
- Ensure the Image Mode image is accessible from all nodes before starting an update.
- Nodes using an Image Mode image cannot update individual RPM packages. All updates must be included in an updated Image Mode image.
- Image mode supports rollback to a previous image if an issue occurs with a new image.
- After an updated node reboots, you should always verify the new image is active.
For information about updating data plane nodes using Image Mode images, see Updating Image Mode (bootc) data plane nodes in Updating your environment to the latest maintenance release.
4.9.1. Preparing the build host
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
Prepare the build host to build the Image Mode (bootc) images and push the images to the container registry.
Prerequisites
- The build host is using Red Hat Enterprise Linux (RHEL) 9.4 or later.
- The build host has a minimum of 10GB of free disk space.
- The user has `sudo` access for privileged container operations.
- The user has container registry push permissions.
Procedure
- Log in to the build host.
Install the packages necessary to perform build operations:
$ sudo dnf install -y podman buildah osbuild-selinux- Create a new file to store the environment variables for your build environment.
Add the following environment variables to your file:
# Container registry and image names
export EDPM_BOOTC_REPO="<container_registry_url>"
export EDPM_BOOTC_TAG="latest"
export EDPM_BOOTC_IMAGE="${EDPM_BOOTC_REPO}:${EDPM_BOOTC_TAG}"
export EDPM_QCOW2_IMAGE="${EDPM_BOOTC_REPO}:${EDPM_BOOTC_TAG}-qcow2"

# Build configuration
export EDPM_BASE_IMAGE="registry.redhat.io/rhel9/rhel-bootc:9.4"
export EDPM_CONTAINERFILE="Containerfile"
export RHSM_SCRIPT="<script_name>"
export FIPS="<fips_setting>"
export USER_PACKAGES="<additional_package_list>"
- Replace <container_registry_url> with the URL of your container registry.
- Replace <script_name> with the Red Hat Subscription Management (RHSM) script. Use rhsm.sh if you have a RHEL subscription and empty.sh if you do not have a subscription.
- Replace <fips_setting> with the FIPS mode value. Use 1 to enable FIPS mode and 0 to disable FIPS mode.
- Replace <additional_package_list> with a space-separated list of additional packages to install. If there are no additional packages to install, use a set of empty quotes ("") for this value.
- Save the file with your build environment variables.
Source the file with your build environment variables:
$ source ~/<build_variable_file>
- Replace <build_variable_file> with the name of the file you created to contain your build environment variables.
Navigate to the EDPM image builder directory:
$ cd edpm-image-builder
- Download the necessary edpm-image-builder files:
$ mkdir -p bootc
$ pushd bootc
$ curl -O https://raw.githubusercontent.com/openstack-k8s-operators/edpm-image-builder/ee5219d7df4772586105649e03d1f545c9e4d653/bootc/Containerfile
$ curl -O https://raw.githubusercontent.com/openstack-k8s-operators/edpm-image-builder/ee5219d7df4772586105649e03d1f545c9e4d653/bootc/rhsm.sh
$ chmod +x rhsm.sh
$ mkdir -p ansible-facts
$ pushd ansible-facts
$ curl -O https://raw.githubusercontent.com/openstack-k8s-operators/edpm-image-builder/ee5219d7df4772586105649e03d1f545c9e4d653/bootc/ansible-facts/bootc.fact
$ popd
$ popd
$ curl -O https://raw.githubusercontent.com/openstack-k8s-operators/edpm-image-builder/ee5219d7df4772586105649e03d1f545c9e4d653/Containerfile.image
$ curl -O https://raw.githubusercontent.com/openstack-k8s-operators/edpm-image-builder/ee5219d7df4772586105649e03d1f545c9e4d653/copy_out.sh
$ chmod +x copy_out.sh
- Navigate to the bootc directory:
$ cd bootc
- Create the output directory:
$ mkdir -p output/yum.repos.d
- Perform the following steps to modify rhsm.sh for RHEL-based builds with Subscription Manager:
- Open rhsm.sh in a text editor.
- Edit the subscription variables:
RHSM_USER=<rhsm_username>
RHSM_PASSWORD=<rhsm_password>
RHSM_POOL=<rhsm_pool_id>
- Replace <rhsm_username> with your Subscription Manager username.
- Replace <rhsm_password> with your Subscription Manager password.
- Replace <rhsm_pool_id> with your Subscription Manager Pool ID if SCA is disabled.
Note: Additional edits of rhsm.sh might be necessary depending on your environment. For example, these can include running a Subscription Manager command with an activation key or using any custom scripts to enable needed repositories.
- Save and close the file.
- Set the RHSM_SCRIPT environment variable to your edited script file:
$ export RHSM_SCRIPT="rhsm.sh"
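As a quick sanity check on the environment file, you can source it and confirm that the derived image references expand as intended. The following sketch uses a hypothetical registry URL (registry.example.com/edpm/edpm-bootc) as a stand-in for your real value:

```shell
# Stand-in values; substitute your real registry URL and tag.
EDPM_BOOTC_REPO="registry.example.com/edpm/edpm-bootc"
EDPM_BOOTC_TAG="latest"

# The derived references used by the later build and push steps.
EDPM_BOOTC_IMAGE="${EDPM_BOOTC_REPO}:${EDPM_BOOTC_TAG}"
EDPM_QCOW2_IMAGE="${EDPM_BOOTC_REPO}:${EDPM_BOOTC_TAG}-qcow2"

echo "${EDPM_BOOTC_IMAGE}"    # registry.example.com/edpm/edpm-bootc:latest
echo "${EDPM_QCOW2_IMAGE}"    # registry.example.com/edpm/edpm-bootc:latest-qcow2
```

If either value prints without the expected registry prefix or tag, fix the environment file before starting a build.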
4.9.2. Building the Image Mode (bootc) container image
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
Build the Image Mode (bootc) image so it can be pushed to the container registry.
Prerequisites
- You have completed the steps in Preparing the build host to prepare the build host.
Procedure
- Log in to the build host.
- Navigate to the bootc directory:
$ cd bootc
- Log in to registry.redhat.io:
$ sudo podman login registry.redhat.io
- Build the Image Mode container image:
$ sudo buildah bud \
    --build-arg EDPM_BASE_IMAGE=${EDPM_BASE_IMAGE} \
    --build-arg RHSM_SCRIPT=${RHSM_SCRIPT} \
    --build-arg FIPS=${FIPS} \
    --build-arg USER_PACKAGES="${USER_PACKAGES}" \
    --volume /etc/pki/ca-trust:/etc/pki/ca-trust:ro,Z \
    --volume $(pwd)/output/yum.repos.d:/etc/yum.repos.d:rw,Z \
    -f ${EDPM_CONTAINERFILE} \
    -t ${EDPM_BOOTC_IMAGE} \
    .
- Verify the container image was built successfully:
$ sudo podman images | grep ${EDPM_BOOTC_REPO}
- Push the container image to the container registry:
$ sudo podman push ${EDPM_BOOTC_IMAGE}
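The verification step above passes as long as grep finds the repository name anywhere in the podman images output. The following sketch mimics that check against a canned listing, with registry.example.com/edpm/edpm-bootc as an assumed repository name:

```shell
# Assumed repository name and a canned "podman images"-style line.
EDPM_BOOTC_REPO="registry.example.com/edpm/edpm-bootc"
images_output="registry.example.com/edpm/edpm-bootc  latest  abc123  2 minutes ago  1.2 GB"

# Same pattern as the verification step: grep -q is silent and only
# sets the exit status, which the if statement branches on.
if printf '%s\n' "${images_output}" | grep -q "${EDPM_BOOTC_REPO}"; then
  echo "image present"
else
  echo "image missing" >&2
fi
```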
4.9.3. Optional: Building an Image Mode (bootc) QCOW2 container image
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
Build a QCOW2 Image Mode (bootc) image so it can be used for bare-metal deployments. This is an optional procedure that is not required for all deployments.
Prerequisites
- You have completed the steps in Preparing the build host to prepare the build host.
Procedure
- Log in to the build host.
- Navigate to the bootc directory:
$ cd bootc
- Set the BUILDER_IMAGE environment variable to use RHEL. The following is an example of performing this task:
$ export BUILDER_IMAGE="registry.redhat.io/rhel9/bootc-image-builder:latest"
- Generate the QCOW2 disk image:
$ sudo podman run \
    --rm \
    -it \
    --privileged \
    --security-opt label=type:unconfined_t \
    -v ./output:/output \
    -v /var/lib/containers/storage:/var/lib/containers/storage \
    ${BUILDER_IMAGE} \
    --type qcow2 \
    --local \
    ${EDPM_BOOTC_IMAGE}
- Move the QCOW2 image to the build directory and confirm the image checksum value:
$ pushd output
$ sudo mv qcow2/disk.qcow2 edpm-bootc.qcow2
$ sudo sha256sum edpm-bootc.qcow2 > edpm-bootc.qcow2.sha256
$ popd
- Prepare the packaging files:
$ cp ../copy_out.sh output/
$ cp ../Containerfile.image output/
- Set the BASE_IMAGE environment variable to use RHEL. The following is an example of performing this task:
$ export BASE_IMAGE=registry.redhat.io/rhel9-4-els/rhel:9.4
- Build the QCOW2 container image:
$ pushd output
$ sudo buildah bud \
    --build-arg IMAGE_NAME=edpm-bootc \
    --build-arg BASE_IMAGE=${BASE_IMAGE} \
    -f ./Containerfile.image \
    -t ${EDPM_QCOW2_IMAGE} \
    .
$ popd
- Push the QCOW2 container image to the container registry:
$ sudo podman push ${EDPM_QCOW2_IMAGE}
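Because the procedure records a SHA256 checksum next to the QCOW2 image, anyone consuming the image can later verify it with sha256sum --check. This sketch demonstrates the round trip with a small stand-in file instead of a real disk image:

```shell
# Create a stand-in "image" and record its checksum, as in the procedure.
echo "example disk contents" > edpm-bootc.qcow2
sha256sum edpm-bootc.qcow2 > edpm-bootc.qcow2.sha256

# --check recomputes the digest and compares it with the recorded value;
# it prints "edpm-bootc.qcow2: OK" and exits 0 on a match.
sha256sum --check edpm-bootc.qcow2.sha256
```

Run the same --check command against the real edpm-bootc.qcow2 after copying it to another host to detect truncated or corrupted transfers.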
4.9.4. Example OpenStackDataPlaneNodeSet CR configuration for QCOW2 container image
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
The following example configuration demonstrates how to use a QCOW2 container image in an OpenStackDataPlaneNodeSet CR:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: edpm-compute-bootc
  namespace: openstack
spec:
  preProvisioned: False
  baremetalSetTemplate:
    osImage: <qcow2_image_filename>
    osContainerImageUrl: <qcow2_image_url>
- Replace <qcow2_image_filename> with the filename of the QCOW2 image to be extracted from the container. This should match the filename inside your QCOW2 container image. For example, edpm-bootc.qcow2.
- Replace <qcow2_image_url> with the full URL to the QCOW2 container image. This image has a -qcow2 suffix. For example, your-registry.example.com/edpm-bootc:latest-qcow2.
For more information about configuring and deploying an OpenStackDataPlaneNodeSet CR, see Creating the data plane in Deploying Red Hat OpenStack Services on OpenShift.
4.9.5. Image building customizations
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
The image building process can be customized in many ways based on your environment and needs. The following customizations are available to modify the image building process:
- Federal Information Processing Standards (FIPS) configuration
Disable FIPS mode by setting the FIPS environment variable to 0:
$ export FIPS="0"
- Package customization
The container file defines several package environment variables that you can customize by modifying the build arguments. Key package categories include:
- BOOTSTRAP_PACKAGES: Core system packages
- OVS_PACKAGES: Open vSwitch packages
- PODMAN_PACKAGES: Container runtime packages
- LIBVIRT_PACKAGES: Virtualization packages
- CEPH_PACKAGES: Ceph storage packages
- USER_PACKAGES: User-defined packages for additional functionality
The USER_PACKAGES environment variable allows you to inject additional packages into the Image Mode (bootc) image during build time. This is useful for adding site-specific tools, drivers, or utilities that are not included in the default package sets.
Define the value of USER_PACKAGES before building the image to inject the additional packages into the completed image.
The following example sets the value of USER_PACKAGES to add custom packages during the build and then executes the buildah command with the customizations included:
$ export USER_PACKAGES="vim htop strace tcpdump"
$ sudo buildah bud \
    --build-arg EDPM_BASE_IMAGE=${EDPM_BASE_IMAGE} \
    --build-arg RHSM_SCRIPT=${RHSM_SCRIPT} \
    --build-arg FIPS=${FIPS} \
    --build-arg USER_PACKAGES="${USER_PACKAGES}" \
    --volume /etc/pki/ca-trust:/etc/pki/ca-trust:ro,Z \
    --volume $(pwd)/output/yum.repos.d:/etc/yum.repos.d:rw,Z \
    -f ${EDPM_CONTAINERFILE} \
    -t ${EDPM_BOOTC_IMAGE} \
    .
The following are some additional example values for USER_PACKAGES:
- Add debugging and monitoring tools:
$ export USER_PACKAGES="vim htop strace tcpdump iperf3 curl wget"
- Add storage-related utilities:
$ export USER_PACKAGES="lvm2 multipath-tools sg3_utils"
- Add network debugging tools:
$ export USER_PACKAGES="nmap netcat-openbsd traceroute mtr"
- Add development tools:
$ export USER_PACKAGES="git gcc make python3-pip"
Note:
- Package names must be valid for the base image OS repository.
- Packages must be available in the configured repositories.
- Additional packages increase the image size.
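A common mistake is to separate USER_PACKAGES entries with commas, which would not resolve as separate package names during installation. A small guard along these lines can catch that before a long build; the package list shown is only an example:

```shell
# Example package list; entries must be space separated.
USER_PACKAGES="vim htop strace tcpdump"

# Reject comma-separated lists before the build starts.
case "${USER_PACKAGES}" in
  *,*) echo "USER_PACKAGES must be space separated" >&2 ;;
  *)   echo "USER_PACKAGES looks OK: ${USER_PACKAGES}" ;;
esac
```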
4.9.6. Troubleshooting the image building process
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
The following are errors you might encounter during the image building process and associated troubleshooting information:
- Build fails with permission errors
If a build fails with permission errors, ensure you are using sudo with the buildah and podman commands for privileged operations.
- Package installation errors
If a package installation fails, verify your repository configuration:
$ ls -la output/yum.repos.d/
- Container registry authentication errors
If container registry authentication fails, ensure you are logged in to your container registry:
$ sudo podman login <registry_url>
Replace <registry_url> with the URL of your container registry.
- Storage space errors
If you receive errors about storage space, delete any unused images:
$ sudo podman system prune -a
Note: Image Mode (bootc) images can take more storage space than other images. You should proactively ensure storage space is not occupied by unused image files before errors occur.
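Because storage space problems are easier to prevent than to debug mid-build, a preflight check along these lines can run before the buildah step. The 10 GB threshold mirrors the build host prerequisite; the checked path is an example:

```shell
# Minimum free space in KiB (10 GB, matching the build host prerequisite).
required_kb=$((10 * 1024 * 1024))

# Free space on the filesystem holding the current directory.
available_kb=$(df --output=avail -k . | tail -n 1 | tr -d ' ')

if [ "${available_kb}" -ge "${required_kb}" ]; then
  echo "sufficient free space: ${available_kb} KiB"
else
  echo "insufficient free space: ${available_kb} KiB" >&2
fi
```

On build hosts that keep container storage on a separate filesystem, point df at /var/lib/containers instead of the current directory.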
4.10. Using custom images with a custom Provision Server
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
By default, the OpenStackBaremetalSet CR automatically creates an OpenStackProvisionServer CR for each node set that serves the bundled operating system image for node provisioning. You can create a custom OpenStackProvisionServer CR that serves a different operating system image.
Prerequisites
A QCOW2 container image that includes all the packages required for data plane deployment, such as the following:
- The QCOW2 disk image and checksum files.
- An endpoint script, such as copy_out.sh, that copies the QCOW2 container image to the directory specified in the DEST_DIR environment variable.
Procedure
Generate a checksum file for the container image appropriate for your environment:
Generate a SHA256 checksum file:
$ sha256sum <image_file_name>.qcow2 > <image_file_name>.qcow2.sha256sum
- Generate an MD5 checksum file:
$ md5sum <image_file_name>.qcow2 > <image_file_name>.qcow2.md5sum
- Generate a SHA512 checksum file:
$ sha512sum <image_file_name>.qcow2 > <image_file_name>.qcow2.sha512sum
- Clone the edpm-image-builder repository:
$ git clone https://github.com/openstack-k8s-operators/edpm-image-builder.git
- Navigate to the edpm-image-builder directory:
$ cd edpm-image-builder
- Create the container image file in the edpm-image-builder directory:
FROM registry.access.redhat.com/ubi9/ubi-minimal:9.6
COPY <image_file_name>.qcow2 /
COPY <image_file_name>.qcow2.sha256sum /
COPY copy_out.sh /copy_out.sh
RUN chmod +x /copy_out.sh
ENTRYPOINT ["/copy_out.sh"]
where:
<image_file_name>- Specifies the name of your QCOW2 container image.
Note:
- The files are copied to your root directory because the default SRC_DIR for copy_out.sh is /. You can set ENV SRC_DIR=<path> in your container file if you want to use a different source directory.
- You can use compressed (.qcow2.gz) or uncompressed (.qcow2) images with the copy_out.sh script.
The files are copied to your root directory because the default
Build and push the container image:
$ buildah bud -f Containerfile -t <your_registry>/my-custom-os-image:latest
$ buildah push <your_registry>/my-custom-os-image:latest
where:
<your_registry>- Specifies the URL to your container registry.
- Create an OpenStackProvisionServer CR that defines your custom provisioning server:
apiVersion: baremetal.openstack.org/v1beta1
kind: OpenStackProvisionServer
metadata:
  name: openstackprovisionserver
spec:
  interface: <connection_interface>
  port: <connection_port>
  osImage: <image_file_name>.qcow2
  osContainerImageUrl: <your_registry>/my-custom-os-image:latest
  apacheImageUrl: registry.redhat.io/ubi9/httpd-24:latest
  agentImageUrl: quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest
where:
<connection_interface>- Specifies the interface to use to connect to the custom provisioning server.
<connection_port>- Specifies the port to use to connect to the custom provisioning server.
<image_file_name>- Specifies the name of your QCOW2 container image.
<your_registry>- Specifies the URL to your container registry.
- Save your custom OpenStackProvisionServer CR file.
- Apply the OpenStackProvisionServer CR configuration:
$ oc apply -f <provision_server_cr_file>
where:
<provision_server_cr_file>- Specifies the file name of the new OpenStackProvisionServer CR file that you created.
- Add your custom OpenStackProvisionServer CR to the applicable OpenStackDataPlaneNodeSet CRs. The following is an example of an OpenStackDataPlaneNodeSet CR that uses a custom OpenStackProvisionServer CR:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: example-nodeset
spec:
  baremetalSetTemplate:
    provisionServerName: <provision_server_name>
    osImage: <image_file_name>.qcow2
    deploymentSSHSecret: <custom_ssh_secret>
    ctlplaneInterface: <ctrl_plane_interface>
  nodes:
    edpm-compute-0:
      hostName: edpm-compute-0
where:
<provision_server_name>- Specifies the name of your custom OpenStackProvisionServer CR.
<image_file_name>- Specifies the name of your QCOW2 container image.
<custom_ssh_secret>- Specifies the SSH secret to use for deployment.
<ctrl_plane_interface>- Specifies the control plane interface to use for provisioning.
- Apply the updated OpenStackDataPlaneNodeSet CR configuration:
$ oc apply -f openstack_data_plane.yaml
- Verify that the data plane resource has been updated by confirming that the status is SetupReady:
$ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m
When the status is SetupReady, the command returns a condition met message, otherwise it returns a timeout error.
For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
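The checksum step earlier in this procedure generates each checksum file with a separate command. If you routinely produce all three, a loop like this covers them in one pass; the image file name here is a stand-in:

```shell
# Stand-in image file so the loop can be demonstrated anywhere.
echo "example disk contents" > my-image.qcow2

# Generate SHA256, MD5, and SHA512 checksum files, matching the
# <name>.qcow2.<algo>sum naming convention used in the procedure.
for algo in sha256 md5 sha512; do
  "${algo}sum" my-image.qcow2 > "my-image.qcow2.${algo}sum"
done

ls my-image.qcow2.*sum
```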