Chapter 4. Customizing the data plane


You can customize your data plane by adding node sets, modifying existing OpenStackDataPlaneNodeSet CRs, adding Compute cells, and creating custom services.

To add additional node sets to your data plane, use the procedures in Creating the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide.

Warning

To prevent database corruption, do not edit the name of any cell in the custom resource (CR).

4.1. Prerequisites

  • The RHOSO environment is deployed on a Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Deploying Red Hat OpenStack Services on OpenShift.
  • You are logged on to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.
Note

Red Hat OpenStack Services on OpenShift (RHOSO) supports external deployments of Red Hat Ceph Storage 7, 8, and 9. Configuration examples that reference Red Hat Ceph Storage use Release 7 information. If you are using a later version of Red Hat Ceph Storage, adjust the configuration examples accordingly.

4.2. Modifying an OpenStackDataPlaneNodeSet CR

Modify an existing OpenStackDataPlaneNodeSet custom resource (CR) to update node configurations or add new nodes to your data plane. Use a new OpenStackDataPlaneDeployment CR to apply these modifications to the data plane.

Note

You must create a new OpenStackDataPlaneDeployment CR to start another Ansible execution that applies any changes you made to the data plane. When an OpenStackDataPlaneDeployment CR completes execution successfully, it does not execute Ansible again, even if the OpenStackDataPlaneDeployment CR or the related OpenStackDataPlaneNodeSet resources are changed.

Procedure

  1. Open the OpenStackDataPlaneNodeSet CR definition file for the node set you want to update, for example, openstack_data_plane.yaml.
  2. Update or add the configuration you require. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties in the Deploying Red Hat OpenStack Services on OpenShift guide.
  3. Save the OpenStackDataPlaneNodeSet CR definition file.
  4. Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f openstack_data_plane.yaml
  5. Verify that the data plane resource has been updated by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  6. If there are any failed OpenStackDataPlaneDeployment CRs in your environment, remove them to allow a new OpenStackDataPlaneDeployment to run Ansible with an updated Secret.
  7. Create a file on your workstation to define the new OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: <node_set_deployment_name>
    • Replace <node_set_deployment_name> with the name of the OpenStackDataPlaneDeployment CR. The name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character.
    Tip

    Give the definition file and the OpenStackDataPlaneDeployment CR unique and descriptive names that indicate the purpose of the modified node set.

  8. Add the OpenStackDataPlaneNodeSet CR that you modified:

    spec:
      nodeSets:
        - <nodeSet_name>
  9. Save the OpenStackDataPlaneDeployment CR deployment file.
  10. Deploy the modified OpenStackDataPlaneNodeSet CR:

    $ oc create -f openstack_data_plane_deploy.yaml -n openstack

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10

    If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
  11. Verify that the modified OpenStackDataPlaneNodeSet CR is deployed:

    $ oc get openstackdataplanedeployment -n openstack
    NAME                   STATUS   MESSAGE
    openstack-data-plane   True     Setup Complete
    
    
    $ oc get openstackdataplanenodeset -n openstack
    NAME                   STATUS   MESSAGE
    openstack-data-plane   True     NodeSet Ready

    For information about the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.

    If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide.

  12. If you added a new node to the node set, then map the node to the Compute cell it is connected to:

    $ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose

    If you did not create additional cells, this command maps the Compute nodes to cell1.

    Access the remote shell for the openstackclient pod and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    $ openstack hypervisor list
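Putting steps 7 and 8 of this procedure together, a complete OpenStackDataPlaneDeployment definition file might look like the following sketch. The names openstack-data-plane-update and openstack-data-plane are hypothetical placeholders for your own deployment and node set names:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-data-plane-update  # hypothetical: a unique, descriptive name for this run
  namespace: openstack
spec:
  nodeSets:
    - openstack-data-plane  # hypothetical: the name of the node set you modified
```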

4.3. Data plane services

A data plane service is an Ansible execution that manages the installation, configuration, and execution of a software deployment on data plane nodes. Each service is a resource instance of the OpenStackDataPlaneService custom resource definition (CRD), which combines Ansible content and configuration data from ConfigMap and Secret CRs. You specify the Ansible execution for your service with Ansible play content, which can be an Ansible playbook from edpm-ansible, or any Ansible play content. The ConfigMap and Secret CRs can contain any configuration data that needs to be consumed by the Ansible content.

The OpenStack Operator provides core services that are deployed by default on data plane nodes. If you omit the services field from the OpenStackDataPlaneNodeSet specification, then the following services are applied by default in the following order:

services:
- redhat
- bootstrap
- download-cache
- configure-network
- validate-network
- install-os
- configure-os
- ssh-known-hosts
- run-os
- reboot-os
- install-certs
- ovn
- neutron-metadata
- libvirt
- nova
- telemetry

The OpenStack Operator also includes the following services that are not enabled by default:


ceph-client

Include this service to configure data plane nodes as clients of a Red Hat Ceph Storage server. Include this service between the install-os and configure-os services. The OpenStackDataPlaneNodeSet CR must include the following configuration to access the Red Hat Ceph Storage secrets:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
spec:
  ...
  nodeTemplate:
    extraMounts:
    - extraVolType: Ceph
      volumes:
      - name: ceph
        secret:
          secretName: ceph-conf-files
      mounts:
      - name: ceph
        mountPath: "/etc/ceph"
        readOnly: true

ceph-hci-pre

Include this service to prepare data plane nodes to host Red Hat Ceph Storage in an HCI configuration. For more information, see Deploying a hyperconverged infrastructure environment.

neutron-dhcp

Include this service to run a Neutron DHCP agent on the data plane nodes.

neutron-ovn

Include this service to run the Neutron OVN agent on the data plane nodes. This agent is required to provide QoS to hardware offloaded ports on the Compute nodes.

neutron-sriov

Include this service to run a Neutron SR-IOV NIC agent on the data plane nodes.

telemetry-power-monitoring

Include this service to gather metrics for power consumption on the data plane nodes.

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

For more information about the available default services, see https://github.com/openstack-k8s-operators/openstack-operator/tree/main/config/services.

You can enable and disable services for an OpenStackDataPlaneNodeSet resource.

Note

Do not change the order of the default service deployments.
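As a sketch, assuming the default service list, an OpenStackDataPlaneNodeSet that enables the ceph-client service places it between the install-os and configure-os services, as described in the service table above:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-data-plane  # hypothetical node set name
spec:
  services:
    - redhat
    - bootstrap
    - download-cache
    - configure-network
    - validate-network
    - install-os
    - ceph-client        # non-default service, inserted at its required position
    - configure-os
    - ssh-known-hosts
    - run-os
    - reboot-os
    - install-certs
    - ovn
    - neutron-metadata
    - libvirt
    - nova
    - telemetry
```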

You can use the OpenStackDataPlaneService CRD to create a custom service that you can deploy on your data plane nodes. You add your custom service to the list of services for a node set at the point in the sequence where the service must be executed. For more information, see Creating and enabling a custom service.

You can view the details of a service by viewing the YAML representation of the resource:

$ oc get openstackdataplaneservice configure-network -o yaml -n openstack

4.3.1. Creating and enabling a custom service

Create custom data plane services by using the OpenStackDataPlaneService custom resource definition (CRD) to deploy additional or non-default software configurations or Ansible content on your data plane nodes.

Note

Do not create a custom service with the same name as one of the default services. If a custom service name matches a default service name, the default service values overwrite the custom service values during OpenStackDataPlaneNodeSet reconciliation.

You specify the Ansible execution for your service with either an Ansible playbook or by including the free-form playbook contents directly in the playbookContents section of the service.

Note

You cannot include an Ansible playbook and playbookContents in the same service.

Procedure

  1. Create an OpenStackDataPlaneService CR and save it to a YAML file on your workstation, for example custom-service.yaml:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: custom-service
    spec:
  2. Specify the Ansible commands to create the custom service, by referencing an Ansible playbook or by including the Ansible play in the playbookContents field:

    • Specify the Ansible playbook to use:

      apiVersion: dataplane.openstack.org/v1beta1
      kind: OpenStackDataPlaneService
      metadata:
        name: custom-service
      spec:
        playbook: osp.edpm.configure_os
    • Specify the Ansible play in the playbookContents field as a string that uses Ansible playbook syntax:

      apiVersion: dataplane.openstack.org/v1beta1
      kind: OpenStackDataPlaneService
      metadata:
        name: custom-service
      spec:
        playbookContents: |
          - hosts: all
            tasks:
              - name: Hello World!
                shell: "echo Hello World!"
                register: output
              - name: Show output
                debug:
                  msg: "{{ output.stdout }}"
              - name: Hello World role
                import_role: hello_world

      For information about how to create an Ansible playbook, see Creating a playbook.

  3. Specify the edpmServiceType field for the service. Different custom services can use the same Ansible content to manage the same data plane service, for example, ovn or nova. The dataSources, TLS certificates, and CA certificates must be mounted at the same locations so that the Ansible content can locate them, and a custom service must reuse the same paths. You use the edpmServiceType field to create this association. The value is the name of the default service that uses the same Ansible content as the custom service. For example, if you have a custom service that uses the edpm_ovn Ansible content from edpm-ansible, set edpmServiceType to ovn, which matches the default ovn service name provided by the OpenStack Operator.

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: custom-service
    spec:
      ...
      edpmServiceType: ovn
    Note

    The acronym edpm used in field names stands for "External Data Plane Management".

  4. Optional: To override the default container image used by the ansible-runner execution environment with a custom image that uses additional Ansible content for a custom service, build and include a custom ansible-runner image. For information, see Building a custom ansible-runner image.
  5. Optional: Specify the names of Secret or ConfigMap resources to use to pass secrets or configurations into the OpenStackAnsibleEE job:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: custom-service
    spec:
      ...
      playbookContents: |
        ...
      dataSources:
        - configMapRef:
            name: hello-world-cm-0
        - secretRef:
            name: hello-world-secret-0
        - secretRef:
            name: hello-world-secret-1
            optional: true
    • dataSources.secretRef.optional: An optional field that, when set to true, marks the resource as optional so that an error is not raised if the resource does not exist.

      A mount is created for each Secret and ConfigMap CR in the OpenStackAnsibleEE pod with a filename that matches the resource value. The mounts are created under /var/lib/openstack/configs/<service name>. You can then use Ansible content to access the configuration or secret data.

  6. Optional: Set the deployOnAllNodeSets field to true if the service must run on all node sets in the OpenStackDataPlaneDeployment CR, even if the service is not listed as a service in every node set in the deployment:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: custom-service
    spec:
    
      playbookContents: |
      ...
      deployOnAllNodeSets: true
  7. Create the custom service:

    $ oc apply -f custom-service.yaml -n openstack
  8. Verify that the custom service is created:

    $ oc get openstackdataplaneservice <custom_service_name> -o yaml -n openstack
  9. Add the custom service to the services field in the definition file for the node sets the service applies to. Add the service name in the order in which it must be executed relative to the other services. If the deployOnAllNodeSets field is set to true, then you need to add the service to only one of the node sets in the deployment.

    Note

    When adding your custom service to the services list in a node set definition, you must include all the required services, including the default services. If you include only your custom service in the services list, then that is the only service that is deployed.
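Combining the steps above, a complete custom service definition might look like the following sketch. The hello-world resource names are hypothetical, and the edpmServiceType value assumes the service reuses the ovn Ansible content:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: custom-service
  namespace: openstack
spec:
  edpmServiceType: ovn        # assumption: this custom service reuses the edpm_ovn content
  deployOnAllNodeSets: true   # optional: run on every node set in the deployment
  dataSources:
    - configMapRef:
        name: hello-world-cm-0       # hypothetical ConfigMap
    - secretRef:
        name: hello-world-secret-0   # hypothetical Secret
        optional: true
  playbookContents: |
    - hosts: all
      tasks:
        - name: Hello World!
          shell: "echo Hello World!"
```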

4.3.2. Building a custom ansible-runner image

You can override the default container image used by the ansible-runner execution environment with your own custom image when you need additional Ansible content for a custom service.

Procedure

  1. Create a Containerfile that adds the custom content to the default image:

    FROM quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest
    COPY my_custom_role /usr/share/ansible/roles/my_custom_role
  2. Build and push the image to a container registry:

    $ podman build -t quay.io/example_user/my_custom_image:latest .
    $ podman push quay.io/example_user/my_custom_image:latest
  3. Specify your new container image as the image that the ansible-runner execution environment must use to add the additional Ansible content that your custom service requires, such as Ansible roles or modules:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: custom-service
    spec:
      label: dataplane-deployment-custom-service
      openStackAnsibleEERunnerImage: quay.io/example_user/my_custom_image:latest
      playbookContents: |
    • openStackAnsibleEERunnerImage: Your container image that the ansible-runner execution environment uses to execute Ansible.

4.4. Configuring a node set for a feature or workload

You can configure a node set to designate it for a particular feature or workload.

Note

The Compute service (nova) provides a default ConfigMap CR named nova-extra-config, where you can add generic configuration that applies to all the node sets that use the default nova service. If you use this default nova-extra-config ConfigMap to add generic configuration to be applied to all the node sets, then you do not need to create a custom service.

Procedure

  1. Create a ConfigMap CR that defines a new configuration file for the feature:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: feature-configmap
      namespace: openstack
    data:
      <integer>-<feature>.conf: |
        <[config_grouping]>
        <config_option> = <value>
        <config_option> = <value>
    Note

    If you are using the default ConfigMap CR for the Compute service named nova-extra-config or any other ConfigMap or Secret intended to pass configuration options to the nova-compute service on the EDPM node, you must configure the target configuration filename to match nova.conf, for example, <integer>-nova-<feature>.conf. For more information, see Configuring the Compute service (nova) in Configuring the Compute service for instance creation.

    • Replace <integer> with a number that indicates when to apply the configuration. The control plane services apply every file in their service directory, /etc/<service>/<service>.conf.d/, in lexicographical order. Therefore, configurations defined in later files override the same configurations defined in an earlier file. Each service operator generates the default configuration file with the name 01-<service>.conf. For example, the default configuration file for the nova-operator is 01-nova.conf.

      Note

      Numbers below 25 are reserved for the OpenStack services and Ansible configuration files.

    • Replace <feature> with a string that indicates the feature being configured.

      Note

      Do not use the name of the default configuration file, because it would override the infrastructure configuration, such as the transport_url.

    • Replace <[config_grouping]> with the name of the group the configuration options belong to in the service configuration file. For example, [compute] or [database].
    • Replace <config_option> with the option you want to configure, for example, cpu_shared_set.
    • Replace <value> with the value for the configuration option, for example, 2,6.

      When the service is deployed, it adds the configuration to the /etc/<service>/<service>.conf.d/ directory in the service container. For example, for a Compute feature, the configuration file is added to /etc/nova/nova.conf.d/ in the nova_compute container.

      For more information on creating ConfigMap objects, see Creating and using config maps in the RHOCP Nodes guide.

    Tip

    You can use a Secret instead of a ConfigMap to create the custom configuration if the configuration includes sensitive information, such as passwords or certificates.

  2. Create a custom service for the node set. For information about how to create a custom service, see Creating and enabling a custom service.
  3. Add the ConfigMap CR to the custom service:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: <nodeset>-service
    spec:
      ...
      dataSources:
        - configMapRef:
            name: feature-configmap
  4. Specify the Secret CR for the cell that the node set that runs this service connects to:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: <nodeset>-service
    spec:
      ...
      dataSources:
        - configMapRef:
            name: feature-configmap
        - secretRef:
            name: nova-migration-ssh-key
        - secretRef:
            name: nova-cell1-compute-config
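As a concrete sketch of the template above, a hypothetical ConfigMap that sets the Compute service cpu_shared_set option might look like the following. The file name 25-nova-cpu-shared.conf is chosen so that it sorts after the operator-generated 01-nova.conf in lexicographical order and is therefore applied later:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cpu-shared-configmap  # hypothetical name
  namespace: openstack
data:
  25-nova-cpu-shared.conf: |
    [compute]
    cpu_shared_set = 2,6
```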

4.5. Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell

Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1. If you added additional Compute cells to your control plane, you must specify the cell to which the node set connects.

Procedure

  1. Create a custom nova service that includes the Secret custom resource (CR) for the cell to connect to:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: <nova_cell_custom>
    spec:
      playbook: osp.edpm.nova
      ...
      dataSources:
      - secretRef:
          name: <cell_secret_ref>
      edpmServiceType: nova
    • Replace <nova_cell_custom> with a name for the custom service, for example, nova-cell1-custom.
    • Replace <cell_secret_ref> with the Secret CR generated by the control plane for the cell, for example, nova-cell1-compute-config.

    For information about how to create a custom service, see Creating and enabling a custom service.

  2. If you configured each cell with a dedicated nova metadata API service, create a custom neutron-metadata service for each cell that includes the Secret CR for connecting to the cell:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: <neutron_cell_metadata_custom>
    spec:
      playbook: osp.edpm.neutron_metadata
      ...
      dataSources:
      - secretRef:
          name: neutron-ovn-metadata-agent-neutron-config
      - secretRef:
          name: <cell_metadata_secret_ref>
      edpmServiceType: neutron-metadata
    • Replace <neutron_cell_metadata_custom> with a name for the custom service, for example, neutron-cell1-metadata-custom.
    • Replace <cell_metadata_secret_ref> with the Secret CR generated by the control plane for the cell, for example, nova-cell1-metadata-neutron-config.
  3. Open the OpenStackDataPlaneNodeSet CR file for the cell node set, for example, openstack_cell1_node_set.yaml.
  4. Replace the nova service in your OpenStackDataPlaneNodeSet CR with your custom nova service for the cell:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-cell1
    spec:
      services:
      services:
        - redhat
        - bootstrap
        - download-cache
        - configure-network
        - validate-network
        - install-os
        - configure-os
        - ssh-known-hosts
        - run-os
        - ovn
        - libvirt
        - nova-cell1-custom
        - telemetry
    Note

    Do not change the order of the default services.

  5. If you created a custom neutron-metadata service, add it to the list of services or replace the neutron-metadata service with your custom service for the cell:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-cell1
    spec:
      services:
        - redhat
        - bootstrap
        - download-cache
        - configure-network
        - validate-network
        - install-os
        - configure-os
        - ssh-known-hosts
        - run-os
        - ovn
        - libvirt
        - nova-cell1-custom
        - neutron-cell1-metadata-custom
        - telemetry
  6. Complete the configuration of your OpenStackDataPlaneNodeSet CR. For more information, see Creating the data plane.
  7. Save the OpenStackDataPlaneNodeSet CR definition file.
  8. Create the data plane resources:

    $ oc create -f openstack_cell1_node_set.yaml
  9. Verify that the data plane resources have been created by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-cell1 --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  10. Verify that the Secret resource was created for the node set:

    $ oc get secret | grep openstack-cell1
    dataplanenodeset-openstack-cell1   Opaque   1   3m50s
  11. Verify the services were created:

    $ oc get openstackdataplaneservice -n openstack | grep nova-cell1-custom
  12. Create an OpenStackDataPlaneDeployment CR to deploy the OpenStackDataPlaneNodeSet CR. For more information, see Deploying the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide.
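For example, a custom nova service for a hypothetical cell2, following the same naming pattern shown for cell1 above, might look like the following sketch:

```yaml
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: nova-cell2-custom  # hypothetical custom service name
spec:
  playbook: osp.edpm.nova
  dataSources:
  - secretRef:
      name: nova-cell2-compute-config  # assumption: the Secret generated by the control plane for cell2
  edpmServiceType: nova
```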

4.6. Creating a custom OpenStackProvisionServer CR

Create a custom OpenStackProvisionServer custom resource (CR) to limit the ports opened during RHOSO installation and deployment. By default, the port range used is 6190-6220.

Procedure

  1. Create a file on your workstation to define the OpenStackProvisionServer CR, for example, my_os_provision_server.yaml:

    apiVersion: baremetal.openstack.org/v1beta1
    kind: OpenStackProvisionServer
    metadata:
      name: my-os-provision-server
    spec:
      interface: enp1s0
      port: 6195
      osImage: edpm-hardened-uefi.qcow2
    • port: Specifies the port that you want to open. Must be in the OpenStackProvisionServer CR range: 6190 - 6220.
  2. Create the OpenStackProvisionServer CR:

    $ oc create -f my_os_provision_server.yaml -n openstack

4.7. Registering third-party nodes with the RHOSO DNS server

The Red Hat OpenStack Services on OpenShift (RHOSO) DNS server is configured only for data plane nodes. If the data plane nodes must resolve third-party nodes that cannot be resolved by the upstream DNS server that the dnsmasq service is configured to forward requests to, you can register the third-party nodes with the same DNS instance that the data plane nodes are configured with.

To register third-party nodes, you create DNSData custom resources (CRs). Creating a DNSData CR updates the DNS configuration and restarts the dnsmasq pods that can then read and resolve the DNS information in the associated DNSData CR.

All nodes must be able to resolve the hostnames of the Red Hat OpenShift Container Platform (RHOCP) pods, for example, by using the external IP of the dnsmasq service.

Procedure

  1. Create a file on your workstation named host_dns_data.yaml to define the DNSData CR:

    apiVersion: network.openstack.org/v1beta1
    kind: DNSData
    metadata:
      name: my-dnsdata
      namespace: openstack
  2. Define the hostnames and IP addresses of each host:

    spec:
      hosts:
      - hostnames:
        - my-host.some.domain
        - same-host.some.domain
        ip: 10.1.1.1
      - hostnames:
        - my-other-host.some.domain
        ip: 10.1.1.2
    • hosts.hostnames: Lists the hostnames that can be used to access the third-party node.
    • hosts.ip: Defines the IP address of the third-party node to which the hostname resolves.
  3. Create the DNSData CR:

    $ oc apply -f host_dns_data.yaml -n openstack
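The DNSData CR from the two steps above, combined into a single definition file, looks like the following:

```yaml
apiVersion: network.openstack.org/v1beta1
kind: DNSData
metadata:
  name: my-dnsdata
  namespace: openstack
spec:
  hosts:
  - hostnames:
    - my-host.some.domain
    - same-host.some.domain
    ip: 10.1.1.1
  - hostnames:
    - my-other-host.some.domain
    ip: 10.1.1.2
```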

4.8. Configuring proxies for data plane nodes

The edpm_bootstrap_command Ansible variable is used to configure system proxy settings. This variable passes shell commands that are executed during the deployment of the bootstrap service on the node. If the services list is customized with services that execute before bootstrap, the commands specified by edpm_bootstrap_command run after those custom services.

Procedure

  1. Open the OpenStackDataPlaneNodeSet CR definition file for the node set you want to update, for example, openstack_data_plane.yaml.
  2. Locate the ansibleVars section of the definition file.
  3. Use the edpm_bootstrap_command variable to append proxy values to the /etc/environment file on the node. The following is an example of using the variable for this purpose:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: openstack-data-plane
      namespace: openstack
    spec:
      ...
      ansibleVars:
        edpm_bootstrap_command: |
            set -e
            cat >>/etc/environment <<EOF
            http_proxy=<http_proxy>
            https_proxy=<https_proxy>
            no_proxy=<no_proxy>
            EOF

    where:

    set -e
    The set -e flag forces the edpm_bootstrap_command sequence to exit immediately if any command fails. This prevents the system from treating a partial or corrupted configuration as a success.
    http_proxy
    The proxy that you want to use for standard HTTP requests.
    https_proxy
    The proxy that you want to use for HTTPS requests.
    no_proxy
    A comma-separated list of domains that you want to exclude from proxy communications.
  4. Save the OpenStackDataPlaneNodeSet CR definition file.
  5. Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f openstack_data_plane.yaml
  6. Verify that the data plane resource has been updated by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  7. If there are any failed OpenStackDataPlaneDeployment CRs in your environment, remove them to allow a new OpenStackDataPlaneDeployment to run Ansible with an updated Secret.
  8. Create a file on your workstation to define the new OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: <node_set_deployment_name>
    • Replace <node_set_deployment_name> with the name of the OpenStackDataPlaneDeployment CR. The name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character.
    Tip

    Give the definition file and the OpenStackDataPlaneDeployment CR unique and descriptive names that indicate the purpose of the modified node set.

  9. Add the OpenStackDataPlaneNodeSet CR that you modified:

    spec:
      nodeSets:
        - <nodeSet_name>
  10. Save the OpenStackDataPlaneDeployment CR deployment file.
  11. Deploy the modified OpenStackDataPlaneNodeSet CR:

    $ oc create -f openstack_data_plane_deploy.yaml -n openstack

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10

    If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
  12. Verify that the modified OpenStackDataPlaneNodeSet CR is deployed:

    $ oc get openstackdataplanedeployment -n openstack
    NAME                   STATUS   MESSAGE
    openstack-data-plane   True     Setup Complete
    
    
    $ oc get openstackdataplanenodeset -n openstack
    NAME                   STATUS   MESSAGE
    openstack-data-plane   True     NodeSet Ready

    For information about the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift guide.

    If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide.

  13. If you added a new node to the node set, then map the node to the Compute cell it is connected to:

    $ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose

    If you did not create additional cells, this command maps the Compute nodes to cell1.

    Access the remote shell for the openstackclient pod and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    $ openstack hypervisor list
4.9. Using Image Mode (bootc) for data plane images

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

You can use Image Mode (bootc) to build, deploy, and manage data plane images as containers, as an alternative to RPM-based approaches.

Consider the following points when deciding to use Image Mode images for this purpose:

  • You must always reboot the node to use Image Mode images.
  • The edpm_update_services service is version dependent on the openstack-selinux package. If edpm_update_services requires a new version of openstack-selinux, build an updated Image Mode image and switch to the updated image.
  • You should perform system updates during planned maintenance windows because of the requirement to reboot a node for it to use an updated image.
  • Ensure the Image Mode image is accessible from all nodes before starting an update.
  • Nodes using an Image Mode image cannot update individual RPM packages. All updates must be included in an updated Image Mode image.
  • Image mode supports rollback to a previous image if an issue occurs with a new image.
  • After an updated node reboots, you should always verify the new image is active.
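
The rollback and verification points above rely on the bootc CLI on the node itself. The following is a sketch of the relevant commands, run directly on a data plane node:

```
# Show the currently booted image and any staged or rollback deployments
$ sudo bootc status

# Queue a rollback to the previous image; it takes effect on the next reboot
$ sudo bootc rollback
```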

For information about updating data plane nodes using Image Mode images, see Updating Image Mode (bootc) data plane nodes in Updating your environment to the latest maintenance release.

4.9.1. Preparing the build host

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

Prepare the build host to build the Image Mode (bootc) images and push the images to the container registry.

Prerequisites

  • The build host is using Red Hat Enterprise Linux (RHEL) 9.4 or later.
  • The build host has a minimum of 10 GB of free disk space.
  • The user has sudo access for privileged container operations.
  • The user has container registry push permissions.

Procedure

  1. Log in to the build host.
  2. Install the packages necessary to perform build operations:

    $ sudo dnf install -y podman buildah osbuild-selinux
  3. Create a new file to store the environment variables for your build environment.
  4. Add the following environment variables to your file:

    # Container registry and image names
    export EDPM_BOOTC_REPO="<container_registry_url>"
    export EDPM_BOOTC_TAG="latest"
    export EDPM_BOOTC_IMAGE="${EDPM_BOOTC_REPO}:${EDPM_BOOTC_TAG}"
    export EDPM_QCOW2_IMAGE="${EDPM_BOOTC_REPO}:${EDPM_BOOTC_TAG}-qcow2"
    
    # Build configuration
    export EDPM_BASE_IMAGE="registry.redhat.io/rhel9/rhel-bootc:9.4"
    export EDPM_CONTAINERFILE="Containerfile"
    export RHSM_SCRIPT="<script_name>"
    export FIPS="<fips_setting>"
    export USER_PACKAGES="<additional_package_list>"
    • Replace <container_registry_url> with the URL of your container registry.
    • Replace <script_name> with the Red Hat Subscription Management (RHSM) script. Use rhsm.sh if you have a RHEL subscription and empty.sh if you do not have a subscription.
    • Replace <fips_setting> with the FIPS mode value. Use 1 to enable FIPS mode and 0 to disable FIPS mode.
    • Replace <additional_package_list> with a space-separated list of additional packages to install. If there are no additional packages to install, use a set of empty quotes ("") for this value.
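    As a quick sanity check, you can verify how the composed image references resolve. The following sketch uses hypothetical placeholder values for the registry and tag:

```shell
# Hypothetical placeholder values; substitute your own registry URL and tag
export EDPM_BOOTC_REPO="registry.example.com/edpm-bootc"
export EDPM_BOOTC_TAG="latest"
export EDPM_BOOTC_IMAGE="${EDPM_BOOTC_REPO}:${EDPM_BOOTC_TAG}"
export EDPM_QCOW2_IMAGE="${EDPM_BOOTC_REPO}:${EDPM_BOOTC_TAG}-qcow2"

# The bootc image and its QCOW2 variant differ only by the -qcow2 tag suffix
echo "${EDPM_BOOTC_IMAGE}"   # registry.example.com/edpm-bootc:latest
echo "${EDPM_QCOW2_IMAGE}"   # registry.example.com/edpm-bootc:latest-qcow2
```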
  5. Save the file with your build environment variables.
  6. Source the file with your build environment variables:

    $ source ~/<build_variable_file>
    • Replace <build_variable_file> with the name of the file that you created to contain your environment variables.
  7. Create the EDPM image builder directory and navigate to it:

    $ mkdir -p edpm-image-builder
    $ cd edpm-image-builder
  8. Download the necessary edpm-image-builder files:

    $ mkdir -p bootc
    $ pushd bootc
    $ curl -O https://raw.githubusercontent.com/openstack-k8s-operators/edpm-image-builder/ee5219d7df4772586105649e03d1f545c9e4d653/bootc/Containerfile
    $ curl -O https://raw.githubusercontent.com/openstack-k8s-operators/edpm-image-builder/ee5219d7df4772586105649e03d1f545c9e4d653/bootc/rhsm.sh
    $ chmod +x rhsm.sh
    $ mkdir -p ansible-facts
    $ pushd ansible-facts
    $ curl -O https://raw.githubusercontent.com/openstack-k8s-operators/edpm-image-builder/ee5219d7df4772586105649e03d1f545c9e4d653/bootc/ansible-facts/bootc.fact
    $ popd
    $ popd
    $ curl -O https://raw.githubusercontent.com/openstack-k8s-operators/edpm-image-builder/ee5219d7df4772586105649e03d1f545c9e4d653/Containerfile.image
    $ curl -O https://raw.githubusercontent.com/openstack-k8s-operators/edpm-image-builder/ee5219d7df4772586105649e03d1f545c9e4d653/copy_out.sh
    $ chmod +x copy_out.sh
  9. Navigate to the bootc directory:

    $ cd bootc
  10. Create the output directory:

    $ mkdir -p output/yum.repos.d
  11. Perform the following steps to modify rhsm.sh for RHEL-based builds with Subscription Manager:

    1. Open rhsm.sh in a text editor.
    2. Edit the subscription variables:

      RHSM_USER=<rhsm_username>
      RHSM_PASSWORD=<rhsm_password>
      RHSM_POOL=<rhsm_pool_id>
      • Replace <rhsm_username> with your Subscription Manager username.
      • Replace <rhsm_password> with your Subscription Manager password.
      • Replace <rhsm_pool_id> with your Subscription Manager Pool ID if SCA is disabled.

        Note

        Additional edits of rhsm.sh might be necessary depending on your environment. For example, these can include running a Subscription Manager command with an activation key or using any custom scripts to enable needed repositories.
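
        For example, a hypothetical fragment that you might substitute into rhsm.sh to register with an activation key instead of a username and password (the key and organization values are placeholders):

```
subscription-manager register --activationkey=<activation_key> --org=<organization_id>
```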

  12. Save and close the file.
  13. Set the RHSM_SCRIPT environment variable to your edited script file:

    $ export RHSM_SCRIPT="rhsm.sh"
4.9.2. Building the Image Mode (bootc) image

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

Build the Image Mode (bootc) image so it can be pushed to the container registry.

Prerequisites

  • You have prepared the build host and defined your build environment variables. For more information, see Preparing the build host.

Procedure

  1. Log in to the build host.
  2. Navigate to the bootc directory:

    $ cd bootc
  3. Log in to registry.redhat.io:

    $ sudo podman login registry.redhat.io
  4. Build the Image Mode container image:

    $ sudo buildah bud \
        --build-arg EDPM_BASE_IMAGE=${EDPM_BASE_IMAGE} \
        --build-arg RHSM_SCRIPT=${RHSM_SCRIPT} \
        --build-arg FIPS=${FIPS} \
        --build-arg USER_PACKAGES="${USER_PACKAGES}" \
        --volume /etc/pki/ca-trust:/etc/pki/ca-trust:ro,Z \
        --volume $(pwd)/output/yum.repos.d:/etc/yum.repos.d:rw,Z \
        -f ${EDPM_CONTAINERFILE} \
        -t ${EDPM_BOOTC_IMAGE} \
        .
  5. Verify the container image was built successfully:

    $ sudo podman images | grep ${EDPM_BOOTC_REPO}
  6. Push the container image to the container registry:

    $ sudo podman push ${EDPM_BOOTC_IMAGE}
4.9.3. Building a QCOW2 Image Mode (bootc) image

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

Build a QCOW2 Image Mode (bootc) image so it can be used for bare-metal deployments. This is an optional procedure that is not required for all deployments.

Prerequisites

  • You have built the Image Mode (bootc) container image and pushed it to your container registry.

Procedure

  1. Log in to the build host.
  2. Navigate to the bootc directory:

    $ cd bootc
  3. Set the BUILDER_IMAGE environment variable to the RHEL bootc-image-builder image. For example:

    $ export BUILDER_IMAGE="registry.redhat.io/rhel9/bootc-image-builder:latest"
  4. Generate the QCOW2 disk image:

    $ sudo podman run \
        --rm \
        -it \
        --privileged \
        --security-opt label=type:unconfined_t \
        -v ./output:/output \
        -v /var/lib/containers/storage:/var/lib/containers/storage \
        ${BUILDER_IMAGE} \
        --type qcow2 \
        --local \
        ${EDPM_BOOTC_IMAGE}
  5. Move the QCOW2 image to the build directory and confirm the image checksum value:

    $ pushd output
    $ sudo mv qcow2/disk.qcow2 edpm-bootc.qcow2
    $ sudo sha256sum edpm-bootc.qcow2 > edpm-bootc.qcow2.sha256
    $ popd
  6. Prepare the packaging files:

    $ cp ../copy_out.sh output/
    $ cp ../Containerfile.image output/
  7. Set the BASE_IMAGE environment variable to the RHEL base image. For example:

    $ export BASE_IMAGE=registry.redhat.io/rhel9-4-els/rhel:9.4
  8. Build the QCOW2 container image:

    $ pushd output
    $ sudo buildah bud \
        --build-arg IMAGE_NAME=edpm-bootc \
        --build-arg BASE_IMAGE=${BASE_IMAGE} \
        -f ./Containerfile.image \
        -t ${EDPM_QCOW2_IMAGE} \
        .
    $ popd
  9. Push the QCOW2 container image to the container registry:

    $ sudo podman push ${EDPM_QCOW2_IMAGE}
4.9.4. Using a QCOW2 container image in an OpenStackDataPlaneNodeSet CR

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

The following example configuration demonstrates how to use a QCOW2 container image in an OpenStackDataPlaneNodeSet CR:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: edpm-compute-bootc
  namespace: openstack
spec:
  preProvisioned: False
  baremetalSetTemplate:
    osImage: <qcow2_image_filename>
    osContainerImageUrl: <qcow2_image_url>
  • Replace <qcow2_image_filename> with the filename of the QCOW2 image to be extracted from the container. This should match the filename inside your QCOW2 container image. For example, edpm-bootc.qcow2.
  • Replace <qcow2_image_url> with the full URL to the QCOW2 container image. This image has a -qcow2 suffix. For example, your-registry.example.com/edpm-bootc:latest-qcow2.

For more information about configuring and deploying an OpenStackDataPlaneNodeSet CR, see Creating the data plane in Deploying Red Hat OpenStack Services on OpenShift.

4.9.5. Image building customizations

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

You can customize the image building process to suit your environment and needs. The following customizations are available:

Federal Information Processing Standards (FIPS) configuration

Disable FIPS mode by setting the FIPS environment variable to 0:

$ export FIPS="0"
Package customization

The Containerfile defines several package environment variables that you can customize by modifying the build arguments. Key package categories include:

  • BOOTSTRAP_PACKAGES: Core system packages
  • OVS_PACKAGES: Open vSwitch packages
  • PODMAN_PACKAGES: Container runtime packages
  • LIBVIRT_PACKAGES: Virtualization packages
  • CEPH_PACKAGES: Ceph storage packages
  • USER_PACKAGES: User-defined packages for additional functionality

    The USER_PACKAGES environment variable allows you to inject additional packages into the Image Mode (bootc) image during build time. This is useful for adding site-specific tools, drivers, or utilities that are not included in the default package sets.

    Define the value of USER_PACKAGES before building the image to inject the additional packages into the completed image.

    The following example sets the value of USER_PACKAGES to add custom packages during the build and then executes the buildah command with the customizations included:

    $ export USER_PACKAGES="vim htop strace tcpdump"
    
    $ sudo buildah bud \
        --build-arg EDPM_BASE_IMAGE=${EDPM_BASE_IMAGE} \
        --build-arg RHSM_SCRIPT=${RHSM_SCRIPT} \
        --build-arg FIPS=${FIPS} \
        --build-arg USER_PACKAGES="${USER_PACKAGES}" \
        --volume /etc/pki/ca-trust:/etc/pki/ca-trust:ro,Z \
        --volume $(pwd)/output/yum.repos.d:/etc/yum.repos.d:rw,Z \
        -f ${EDPM_CONTAINERFILE} \
        -t ${EDPM_BOOTC_IMAGE} \
        .

    The following are some additional example values for USER_PACKAGES:

  • Add debugging and monitoring tools:

    $ export USER_PACKAGES="vim htop strace tcpdump iperf3 curl wget"
  • Add storage-related utilities:

    $ export USER_PACKAGES="lvm2 multipath-tools sg3_utils"
  • Add network debugging tools:

    $ export USER_PACKAGES="nmap netcat-openbsd traceroute mtr"
  • Add development tools:

    $ export USER_PACKAGES="git gcc make python3-pip"
    Note
    • Package names must be valid for the base image OS repository.
    • Packages must be available in the configured repositories.
    • Additional packages increase the image size.

4.9.6. Troubleshooting the image building process

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

The following are errors you might encounter during the image building process and associated troubleshooting information:

Build fails with permission errors
If a build fails with permission errors, ensure you are using sudo with the buildah and podman commands for privileged operations.
Package installation errors

If a package installation fails, verify your repository configuration:

$ ls -la output/yum.repos.d/
Container registry authentication errors

If container registry authentication fails, ensure you are logged in to your container registry:

$ sudo podman login <registry_url>
  • Replace <registry_url> with the URL of your container registry.
Storage space errors

If you receive errors about storage space, delete any unused images:

$ sudo podman system prune -a
Note

Image Mode (bootc) images can consume more storage space than other images. Proactively ensure that storage space is not occupied by unused image files before errors occur.

4.9.7. Creating a custom OpenStackProvisionServer CR

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

By default, the OpenStackBaremetalSet CR automatically creates an OpenStackProvisionServer CR for each node set that serves the bundled operating system image for node provisioning. You can create a custom OpenStackProvisionServer CR that serves a different operating system image.

Prerequisites

  • A QCOW2 container image that includes all the packages required for data plane deployment and contains the following:

    • The QCOW2 disk image and checksum files.
    • An entrypoint script, such as copy_out.sh, that copies the QCOW2 image files to the directory specified in the DEST_DIR environment variable.

Procedure

  1. Generate a checksum file for the QCOW2 image by using the algorithm appropriate for your environment:

    • Generate a SHA256 checksum file:

      $ sha256sum <image_file_name>.qcow2 > <image_file_name>.qcow2.sha256sum
    • Generate an MD5 checksum file:

      $ md5sum <image_file_name>.qcow2 > <image_file_name>.qcow2.md5sum
    • Generate a SHA512 checksum file:

      $ sha512sum <image_file_name>.qcow2 > <image_file_name>.qcow2.sha512sum
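
    The checksum file generated above can later be used to verify the integrity of the image. A minimal sketch, using a placeholder file in place of a real QCOW2 image:

```shell
# Create a placeholder file standing in for your real QCOW2 image
echo "example image data" > example.qcow2

# Generate the SHA256 checksum file, as in the step above
sha256sum example.qcow2 > example.qcow2.sha256sum

# Verify the file against its checksum; prints "example.qcow2: OK" on success
sha256sum -c example.qcow2.sha256sum
```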
  2. Clone the edpm-image-builder repository:

    $ git clone https://github.com/openstack-k8s-operators/edpm-image-builder.git
  3. Navigate to the edpm-image-builder directory:

    $ cd edpm-image-builder
  4. Create a Containerfile in the edpm-image-builder directory:

    FROM registry.access.redhat.com/ubi9/ubi-minimal:9.6
    COPY <image_file_name>.qcow2 /
    COPY <image_file_name>.qcow2.sha256sum /
    COPY copy_out.sh /copy_out.sh
    RUN chmod +x /copy_out.sh
    ENTRYPOINT ["/copy_out.sh"]

    where:

    <image_file_name>

    Specifies the name of your QCOW2 container image.

    Note
    • The files are copied to the root directory of the container image because the default SRC_DIR for copy_out.sh is /. You can set ENV SRC_DIR=<path> in your container file if you want to use a different source directory.
    • You can use compressed (.qcow2.gz) or uncompressed (.qcow2) images with the copy_out.sh script.
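
    For example, a hypothetical Containerfile variant that stores the image files under /images instead of the root directory (the /images path is an assumption for illustration):

```
FROM registry.access.redhat.com/ubi9/ubi-minimal:9.6
# Tell copy_out.sh to read the image files from /images instead of /
ENV SRC_DIR=/images
COPY <image_file_name>.qcow2 /images/
COPY <image_file_name>.qcow2.sha256sum /images/
COPY copy_out.sh /copy_out.sh
RUN chmod +x /copy_out.sh
ENTRYPOINT ["/copy_out.sh"]
```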
  5. Build and push the container image:

    $ buildah bud -f Containerfile -t <your_registry>/my-custom-os-image:latest
    $ buildah push <your_registry>/my-custom-os-image:latest

    where:

    <your_registry>
    Specifies the URL to your container registry.
  6. Create an OpenStackProvisionServer CR that defines your custom provisioning server:

    apiVersion: baremetal.openstack.org/v1beta1
    kind: OpenStackProvisionServer
    metadata:
      name: openstackprovisionserver
    spec:
      interface: <connection_interface>
      port: <connection_port>
      osImage: <image_file_name>.qcow2
      osContainerImageUrl: <your_registry>/my-custom-os-image:latest
      apacheImageUrl: registry.redhat.io/ubi9/httpd-24:latest
      agentImageUrl: quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest

    where:

    <connection_interface>
    Specifies the interface to use to connect to the custom provisioning server.
    <connection_port>
    Specifies the port to use to connect to the custom provisioning server.
    <image_file_name>
    Specifies the name of your QCOW2 container image.
    <your_registry>
    Specifies the URL to your container registry.
  7. Save your custom OpenStackProvisionServer CR file.
  8. Apply the OpenStackProvisionServer CR configuration:

    $ oc apply -f <provision_server_cr_file>

    where:

    <provision_server_cr_file>
    Specifies the file name of the new OpenStackProvisionServer CR file that you created.
  9. Add your custom OpenStackProvisionServer CR to the applicable OpenStackDataPlaneNodeSet CRs. The following is an example of an OpenStackDataPlaneNodeSet CR that uses a custom OpenStackProvisionServer CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    metadata:
      name: example-nodeset
    spec:
      baremetalSetTemplate:
        provisionServerName: <provision_server_name>
        osImage: <image_file_name>.qcow2
        deploymentSSHSecret: <custom_ssh_secret>
        ctlplaneInterface: <ctrl_plane_interface>
      nodes:
        edpm-compute-0:
          hostName: edpm-compute-0

    where:

    <provision_server_name>
    Specifies the name of your custom OpenStackProvisionServer CR.
    <image_file_name>
    Specifies the name of your QCOW2 container image.
    <custom_ssh_secret>
    Specifies the SSH secret to use for deployment.
    <ctrl_plane_interface>
    Specifies the control plane interface to use for provisioning.
  10. Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f openstack_data_plane.yaml
  11. Verify that the data plane resource has been updated by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
