Chapter 3. Customizing the control plane


The Red Hat OpenStack Services on OpenShift (RHOSO) control plane contains the RHOSO services that manage the cloud. The RHOSO services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You can customize your deployed control plane with the services required for your environment.

3.1. Prerequisites

  • The RHOSO environment is deployed on a RHOCP cluster. For more information, see Deploying Red Hat OpenStack Services on OpenShift.
  • You are logged on to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.

3.2. Enabling disabled services

To enable a service that is disabled, set enabled: true in the service definition. You must then either add an empty template, template: {}, to the service definition to apply the default values for the service, or specify some or all of the template parameter values. For example, to enable the Dashboard service (horizon) with the default service values, add the following configuration to your OpenStackControlPlane custom resource (CR):

spec:
  ...
  horizon:
    apiOverride: {}
    enabled: true
    template: {}

To set the values of specific service parameters, add the following configuration to your OpenStackControlPlane CR instead:

spec:
  ...
  horizon:
    apiOverride: {}
    enabled: true
    template:
      customServiceConfig: ""
      memcachedInstance: memcached
      override: {}
      preserveJobs: false
      replicas: 2
      resources: {}
      secret: osp-secret
      tls: {}

Any parameters that you do not specify are set to the default value from the service template.
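
For example, after you update the control plane you can review the values that the OpenStack Operator applied, including the defaults, by retrieving the deployed service resource. This check assumes the horizon resource name that is used in the Dashboard procedure later in this chapter:

$ oc get horizons horizon -n openstack -o yaml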

3.3. Controlling the placement of control plane services

By default, the OpenStack Operator deploys Red Hat OpenStack Services on OpenShift (RHOSO) services on any worker node. You can control the placement of each RHOSO service pod by creating Topology custom resources (CRs). You can apply a Topology CR at the top level of the OpenStackControlPlane CR to specify the default pod spread policy for the control plane. You can also override the default spread policy in the specification of each service in the OpenStackControlPlane CR.

Procedure

  1. Create a file on your workstation that defines a Topology CR that spreads the service pods across worker nodes, for example, default_ctlplane_topology.yaml:

    apiVersion: topology.openstack.org/v1beta1
    kind: Topology
    metadata:
      name: default-ctlplane-topology
      namespace: openstack
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        matchLabelKeys:
          - pod-template-hash
          - controller-revision-hash
    • metadata.name: The name must be unique, must contain only lowercase alphanumeric characters, - (hyphens), or . (periods), and must start and end with an alphanumeric character.
    • topologySpreadConstraints.whenUnsatisfiable: Specifies how the scheduler handles a pod if it does not satisfy the spread constraint:

      • DoNotSchedule: Instructs the scheduler not to schedule the pod. This is the default behavior. To ensure the deployment has high availability (HA), set the HA services rabbitmq and galera to DoNotSchedule.
      • ScheduleAnyway: Instructs the scheduler to schedule the pod anyway, but to give higher precedence to topologies that minimize the skew. If you set HA services to ScheduleAnyway and the spread constraint cannot be satisfied, the pod is placed on a different worker node. You must then move the pod manually to the correct node when that node becomes operational. For more information about how to manually move pods, see Controlling pod placement onto nodes (scheduling) in RHOCP Nodes.
    • topologySpreadConstraints.matchLabelKeys: An optional field that specifies the label keys to use to group the pods that the affinity rules are applied to. Use this field to ensure that the affinity rules are applied only to pods from the same StatefulSet or Deployment resource during scheduling. When the resource is updated and new pods are created, the matchLabelKeys field ensures that the spread constraint rules are applied only to the new set of pods.
  2. Create a file on your workstation that defines a Topology CR that enforces strict spread constraints for HA service pods, for example, ha_ctlplane_topology.yaml:

    apiVersion: topology.openstack.org/v1beta1
    kind: Topology
    metadata:
      name: ha-ctlplane-topology
      namespace: openstack
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        matchLabelKeys:
          - pod-template-hash
          - controller-revision-hash
  3. Create the Topology CRs:

    $ oc create -f default_ctlplane_topology.yaml
    $ oc create -f ha_ctlplane_topology.yaml
  4. Open your OpenStackControlPlane CR file on your workstation.
  5. Specify that the service pods, when created, are spread across the worker nodes in your control plane:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      topologyRef:
        name: default-ctlplane-topology
  6. Update the specifications for the rabbitmq and galera services to ensure that the HA service pods, when created, are only placed on a worker node when the spread constraint can be satisfied:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      topologyRef:
        name: default-ctlplane-topology
      ...
      galera:
        topologyRef:
          name: ha-ctlplane-topology
        ...
      rabbitmq:
        topologyRef:
          name: ha-ctlplane-topology
        ...
  7. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  8. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  9. Verify that the service pods are running on the correct worker nodes.

    Example

    $ oc -n openstack get pods -o wide | grep -iE "(rabbitmq|galera)"
    openstack-galera-0           1/1     Running     0             24m     192.172.28.33   worker-0
    openstack-galera-1           1/1     Running     0             24m     192.172.16.63   worker-1
    openstack-galera-2           1/1     Running     0             24m     192.172.12.82   worker-2
    rabbitmq-server-0            1/1     Running     0             24m     192.168.24.95   worker-2
    rabbitmq-server-1            1/1     Running     0             24m     192.168.16.84   worker-0
    rabbitmq-server-2            1/1     Running     0             24m     192.168.20.137  worker-1
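
You can also confirm that the Topology CRs that you created exist in the openstack namespace. A quick check, assuming the CR names used in this procedure; the output lists default-ctlplane-topology and ha-ctlplane-topology:

$ oc get topology -n openstack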

3.4. Adding Compute cells to the control plane

You can use cells to divide Compute nodes in large deployments into groups. Each cell has a dedicated message queue, runs standalone copies of the cell-specific Compute services and databases, and stores instance metadata in a database dedicated to instances in that cell.

By default, the control plane creates two cells:

  • cell0: The controller cell that manages global components and services, such as the Compute scheduler and the global conductor. This cell also contains a dedicated database to store information about instances that failed to be scheduled to a Compute node. You cannot connect Compute nodes to this cell.
  • cell1: The default cell that Compute nodes are connected to when you don’t create and configure additional cells.

You can add cells to your Red Hat OpenStack Services on OpenShift (RHOSO) environment when you create your control plane or at any time afterwards. The following procedure adds one additional cell, cell2, and configures each cell with a dedicated nova metadata API service. Creating a dedicated nova metadata API service for each cell improves performance and scalability in large deployments. Alternatively, you can deploy one nova metadata API service at the top level that serves all the cells.

Procedure

  1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
  2. Create a database server for each new cell that you want to add to your RHOSO environment:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      secret: osp-secret
      ...
      galera:
        enabled: true
        templates:
          openstack:
            storageRequest: 5G
            secret: cell0-secret
            replicas: 1
          openstack-cell1:
            storageRequest: 5G
            secret: cell1-secret
            replicas: 1
          openstack-cell2:
            storageRequest: 5G
            secret: cell2-secret
            replicas: 1
    • templates.openstack: The database used by most of the RHOSO services, including the Compute services nova-api and nova-scheduler, and cell0.
    • templates.openstack-cell1: The database to be used by cell1.
    • templates.openstack-cell2: The database to be used by cell2.
  3. Create a message bus for each new cell that you want to add to your RHOSO environment, with a unique load balancer IP address for each message bus:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
    spec:
      secret: osp-secret
      ...
      rabbitmq:
        templates:
          rabbitmq:
            override:
              service:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.85
                spec:
                  type: LoadBalancer
          rabbitmq-cell1:
            override:
              service:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.86
                spec:
                  type: LoadBalancer
          rabbitmq-cell2:
            override:
              service:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.87
                spec:
                  type: LoadBalancer
    • rabbitmq.rabbitmq: The message bus used by most of the RHOSO services, including the Compute services nova-api and nova-scheduler, and cell0.
    • rabbitmq.rabbitmq-cell1: The message bus to be used by cell1.
    • rabbitmq.rabbitmq-cell2: The message bus to be used by cell2.
  4. Optional: Override the default VNC proxy service route hostname with your custom API public endpoint:

      nova:
        apiOverride:
          route: {}
        cellOverride:
          cell1:
            noVNCProxy:
              route:
                spec:
                  host: myvncproxy.domain.name
    Note

    The hostname must be resolvable by the DNS service in your data center, to which the RHOCP cluster and the DNS instance forward their requests. You cannot use the internal RHOCP CoreDNS service.

  5. Add the new cells to the cellTemplates configuration in the nova service configuration:

      nova:
        ...
        template:
          ...
          metadataServiceTemplate:
            enabled: false
          secret: osp-secret
          apiDatabaseAccount: nova-api
          cellTemplates:
            cell0:
              hasAPIAccess: true
              cellDatabaseAccount: nova-cell0
              cellDatabaseInstance: openstack
              cellMessageBusInstance: rabbitmq
              conductorServiceTemplate:
                replicas: 1
            cell1:
              hasAPIAccess: true
              cellDatabaseAccount: nova-cell1
              cellDatabaseInstance: openstack-cell1
              cellMessageBusInstance: rabbitmq-cell1
              conductorServiceTemplate:
                replicas: 1
              metadataServiceTemplate:
                enabled: true
                replicas: 1
            cell2:
              hasAPIAccess: true
              cellDatabaseAccount: nova-cell2
              cellDatabaseInstance: openstack-cell2
              cellMessageBusInstance: rabbitmq-cell2
              conductorServiceTemplate:
                replicas: 1
              metadataServiceTemplate:
                enabled: true
                replicas: 1
    Copy to Clipboard Toggle word wrap
    • template.metadataServiceTemplate.enabled: Disables the single nova metadata API service that serves all the cells. If you want just one nova metadata API service that serves all the cells, set this field to true and remove the configuration for the metadata service from each cell.
    • template.cellTemplates.cell2: The name of the new Compute cell. The name has a limit of 20 characters, and must contain only lowercase alphanumeric characters and the - symbol. For more information about the properties you can configure for a cell, view the definition for the Nova CRD:

      $ oc describe crd novas.nova.openstack.org
  6. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  7. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  8. Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace for each of the cells you created:

    $ oc get pods -n openstack | grep cell2
    nova-cell2-conductor-0             1/1     Running     2               5d20h
    nova-cell2-novncproxy-0            1/1     Running     2               5d20h
    openstack-cell2-galera-0           1/1     Running     2               5d20h
    rabbitmq-cell2-server-0            1/1     Running     2               5d20h

    The control plane is deployed when all the pods are either completed or running.

  9. Optional: Confirm that the new cells are created:

    $ oc exec -it nova-cell0-conductor-0 -- /bin/bash
    # nova-manage cell_v2 list_cells
    +-------+--------------------------------------+----------------------------------------------------------------------------------+------------------------------------------------------------+----------+
    | Name  | UUID                                 | Transport URL                                                                    | Database Connection                                        | Disabled |
    +-------+--------------------------------------+----------------------------------------------------------------------------------+------------------------------------------------------------+----------+
    | cell0 | 00000000-0000-0000-0000-000000000000 | rabbit:                                                                          | mysql+pymysql://nova_cell0:****@openstack/nova_cell0       | False    |
    | cell1 | c5bf5e35-6677-40aa-80d0-33a440cac14e | rabbit://default_user_CuUVnXz-PvgzXvPxypU:****@rabbitmq-cell1.openstack.svc:5672 | mysql+pymysql://nova_cell1:****@openstack-cell1/nova_cell1 | False    |
    | cell2 | c5bf5e35-6677-40aa-80d0-33a440cac14e | rabbit://default_user_CuUVnXz-PvgzXvPxypU:****@rabbitmq-cell2.openstack.svc:5672 | mysql+pymysql://nova_cell2:****@openstack-cell2/nova_cell2 | False    |
    +-------+--------------------------------------+----------------------------------------------------------------------------------+------------------------------------------------------------+----------+
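
    You can also list the Compute hosts that are mapped to each cell after you connect Compute nodes to the new cell. This is a sample command only; the output depends on your data plane:

    # nova-manage cell_v2 list_hosts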

3.5. Removing a Compute cell from the control plane

You can remove a cell from the control plane to release control plane resources. To remove a cell, you delete the references to the cell in your OpenStackControlPlane custom resource (CR) and then delete the related secrets and CRs.

Note

You cannot remove cell0 from the control plane.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, on your workstation.
  2. Remove the cell definition from the cellTemplates. For example:

    spec:
    ...
      cellTemplates:
        cell0:
          cellDatabaseAccount: nova-cell0
          hasAPIAccess: true
        ...
        <cellname>:
          ...
    • Replace <cellname> with the name of the cell that you are removing, and delete the entire definition for that cell.
  3. Delete the cell-specific RabbitMQ definition from the OpenStackControlPlane CR. For example:

    spec:
    ...
      rabbitmq:
        templates:
          ...
          rabbitmq-<cellname>:
            ...
  4. Delete the cell-specific Galera definition from the OpenStackControlPlane CR. For example:

    spec:
    ...
      galera:
        templates:
          ...
          openstack-<cellname>:
            ...
  5. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  6. Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  7. Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace for each of the cells you created. The control plane is deployed when all the pods are either completed or running. Ensure that the cell you deleted is not present in the output.

Verification

  1. Open a remote shell connection to the OpenStackClient pod:

    $ oc rsh -n openstack openstackclient
  2. Confirm that the Compute services for the cell that you removed are no longer registered:

    $ openstack compute service list
    +--------------------------------------+----------------+------------------------+----------+---------+-------+----------------------------+
    | ID                                   | Binary         | Host                   | Zone     | Status  | State | Updated At                 |
    +--------------------------------------+----------------+------------------------+----------+---------+-------+----------------------------+
    | 792258c6-fc84-4f6c-8d8c-48c1c4873786 | nova-conductor | nova-cell0-conductor-0 | internal | enabled | up    | 2025-02-10T11:04:34.000000 |
    | b072bd47-38f9-40c9-8be8-f1dbd0b602f6 | nova-scheduler | nova-scheduler-0       | internal | enabled | up    | 2025-02-10T11:04:27.000000 |
    | 10f36138-90da-4ef3-8c1f-a9dfd0c4ca0c | nova-conductor | nova-cell1-conductor-0 | internal | enabled | up    | 2025-02-10T11:04:28.000000 |
    +--------------------------------------+----------------+------------------------+----------+---------+-------+----------------------------+
  3. Exit the OpenStackClient pod:

    $ exit
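
If you created dedicated secrets for the cell, you can also delete them to complete the cleanup. A sketch, assuming the cell2-secret name that is used in the previous section:

$ oc delete secret cell2-secret -n openstack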

3.6. Configuring Compute notifications

You can configure the Compute service (nova) to provide notifications to the Telemetry services in your Red Hat OpenStack Services on OpenShift (RHOSO) environment. The Compute service supports designating a dedicated RabbitMQ instance as a notification server.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

Procedure

  1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
  2. Update the rabbitmq service configuration to provide Compute service notifications.

    The following example creates a single notification server:

    spec:
      rabbitmq:
        enabled: true
        templates:
          ...
          <rabbitmq_notification_server>:
            delayStartSeconds: 30
            override:
              service:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/loadBalancerIPs: <ip_address>
                spec:
                  type: LoadBalancer
    • Replace <rabbitmq_notification_server> with the name of your notification server, for example, rabbitmq-notifications.
    • Replace <ip_address> with the appropriate IP address, based on your networking plan and configuration.
  3. Register the notification server with the Compute service:

    spec:
      nova:
        template:
          ...
          apiMessageBusInstance: rabbitmq
          notificationsBusInstance: <rabbitmq_notification_server>
    • Replace <rabbitmq_notification_server> with the name of the notification server created in the previous step.
  4. Save openstack_control_plane.yaml.
  5. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  6. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  7. Optional: Confirm that the control plane is updated by reviewing the pods in the openstack namespace. The control plane is updated when all the pods are either completed or running.
  8. Optional: Verify that the notification transport URL is configured properly. For example:

    $ oc get transporturl
    NAME                          STATUS   MESSAGE
    ...
    nova-api-transport            True     Setup complete
    nova-cell1-transport          True     Setup complete
    nova-notification-transport   True     Setup complete
  9. Create a new OpenStackDataPlaneDeployment CR to configure notifications on the data plane nodes and deploy the data plane. Save the CR to a file named compute_notifications_deploy.yaml on your workstation:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: nova-notifications
      namespace: openstack
    spec:
      nodeSets:
      - openstack-edpm
      - ...
      - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.

      Note

      The following example shows how to list your data plane deployments and retrieve the node sets that a deployment includes:

      $ oc get openstackdataplanedeployment
      NAME              NODESETS             STATUS   MESSAGE
      edpm-deployment   ["openstack-edpm"]   True     Setup complete
      
      $ oc get openstackdataplanedeployment edpm-deployment -o jsonpath='{.spec.nodeSets}'
      ["openstack-edpm"]
  10. Save the compute_notifications_deploy.yaml deployment file.
  11. Deploy the data plane updates:

    $ oc create -f compute_notifications_deploy.yaml
  12. Verify that the data plane is deployed:

    $ oc get openstackdataplanedeployment
    
    NAME                STATUS MESSAGE
    nova-notifications  True   Deployed
  13. Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    
    $ openstack hypervisor list
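
You can also check that the notification RabbitMQ pods are running in the openstack namespace. A quick check, assuming a notification server named rabbitmq-notifications as in the earlier example:

$ oc get pods -n openstack | grep rabbitmq-notifications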

3.7. Enabling the Dashboard service (horizon)

You can enable the Dashboard service (horizon) interface to give cloud users access to the cloud through a browser.

Procedure

  1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
  2. Enable the horizon service:

    spec:
      ...
      horizon:
        enabled: true
  3. Optional: Override the default route hostname for the horizon service with your custom API public endpoint:

    spec:
      ...
      horizon:
        enabled: true
        apiOverride:
          route:
            spec:
              host: myhorizon.domain.name
    Note

    The hostname must be resolvable by the DNS service in your data center, to which the RHOCP cluster and the DNS instance forward their requests. You cannot use the internal RHOCP CoreDNS service.

  4. Configure the horizon service:

    spec:
      ...
      horizon:
        ...
        template:
          customServiceConfig: ""
          memcachedInstance: memcached
          override: {}
          preserveJobs: false
          replicas: 2
          resources: {}
          secret: osp-secret
          tls: {}
    • horizon.template.replicas: Set replicas to a minimum of 2 for high availability.
  5. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  6. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  7. Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are either completed or running.

  8. Retrieve the Dashboard service endpoint URL:

    $ oc get horizons horizon -o jsonpath='{.status.endpoint}'

    Use this URL to access the Horizon interface.
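
    Optionally, you can confirm from the command line that the endpoint responds before you open a browser. A quick check, assuming curl is available on your workstation:

    $ curl -kLs -o /dev/null -w '%{http_code}\n' "$(oc get horizons horizon -n openstack -o jsonpath='{.status.endpoint}')"
    200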

Verification

  1. To log in as the admin user, obtain the admin password from the AdminPassword parameter in the osp-secret secret:

    $ oc get secret osp-secret -o jsonpath='{.data.AdminPassword}' | base64 -d
  2. Open a browser.
  3. Enter the Dashboard endpoint URL.
  4. Log in to the Dashboard with your username and password.

3.8. Enabling the Orchestration service (heat)

You can enable the Orchestration service (heat) in your Red Hat OpenStack Services on OpenShift (RHOSO) environment. Cloud users can use the Orchestration service to create and manage cloud resources such as storage, networking, instances, or applications.

Procedure

  1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
  2. Enable and configure the heat service:

    spec:
      ...
      heat:
        apiOverride:
          route: {}
        cnfAPIOverride:
          route: {}
        enabled: true
        template:
          databaseAccount: heat
          databaseInstance: openstack
          heatAPI:
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            replicas: 1
            resources: {}
            tls:
              api:
                internal: {}
                public: {}
          heatCfnAPI:
            override: {}
            replicas: 1
            resources: {}
            tls:
              api:
                internal: {}
                public: {}
          heatEngine:
            replicas: 1
            resources: {}
          memcachedInstance: memcached
          passwordSelectors:
            authEncryptionKey: HeatAuthEncryptionKey
            service: HeatPassword
          preserveJobs: false
          rabbitMqClusterName: rabbitmq
          secret: osp-secret
          serviceUser: heat
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  5. Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are either completed or running.

Verification

  1. Open a remote shell connection to the OpenStackClient pod:

    $ oc rsh -n openstack openstackclient
  2. Confirm that the internal service endpoints are registered with each service:

    $ openstack endpoint list -c 'Service Name' -c Interface -c URL --service heat
    +--------------+-----------+--------------------------------------------------------------+
    | Service Name | Interface | URL                                                          |
    +--------------+-----------+--------------------------------------------------------------+
    | heat         | internal  | http://heat-internal.openstack.svc:8004                      |
    | heat         | public    | http://heat-public-openstack.apps.ostest.test.metalkube.org  |
    +--------------+-----------+--------------------------------------------------------------+
  3. Exit the OpenStackClient pod:

    $ exit
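
After the Orchestration service is enabled, you can run a quick smoke test of the engine from the OpenStackClient pod. Listing stacks on a new deployment returns an empty list:

$ openstack stack list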

3.9. Customizing OpenStackClient API versions

You can change the default OpenStackClient API versions for a Red Hat OpenStack Services on OpenShift (RHOSO) service by customizing the environment variables for the OpenStackClient pod.

Procedure

  1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
  2. Add the openstackclient specification and define the name-value pairs for each environment variable you want to customize. Specify the environment variable by using the format OS_<SERVICE>_API_VERSION. The following example customizes the environment variables for the Identity (keystone) and Compute (nova) services:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      ...
      openstackclient:
        template:
          env:
          - name: OS_IDENTITY_API_VERSION
            value: "3"
          - name: OS_COMPUTE_API_VERSION
            value: "2.95"
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  5. Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are either completed or running.

Verification

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Verify that your custom environment variables are set:

    $ env | grep API_VERSION
    OS_COMPUTE_API_VERSION=2.95
    OS_IDENTITY_API_VERSION=3
  3. Exit the OpenStackClient pod:

    $ exit
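
You can also override an API version for a single command instead of setting it on the pod. For example, the unified client accepts per-command options such as the Compute API microversion:

$ openstack --os-compute-api-version 2.95 server list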

3.10. Configuring DNS endpoints

You can change the default DNS hostname of any Red Hat OpenStack Services on OpenShift (RHOSO) service that is exposed by a route and that supports the apiOverride field. To change the default DNS hostname of a service, use the apiOverride field to customize the hostname that is set for the service route.

Procedure

  1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
  2. Update the apiOverride field for the service to override the default route hostname with your custom API public endpoint:

    spec:
      ...
      cinder:
        enabled: true
        apiOverride:
          route:
            spec:
              host: mycinder.domain.name
    Note

    The hostname must be resolvable by the DNS service in your data center, to which the RHOCP cluster and the DNS instance forward their requests. You cannot use the internal RHOCP CoreDNS service.

  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  5. Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are either completed or running.

Verification

  1. Confirm that the route is created:

    $ oc get route -n openstack cinder
    NAME      HOST/PORT               PATH   SERVICES   PORT      TERMINATION          WILDCARD
    cinder    mycinder.domain.name           cinder     cinder    reencrypt/Redirect   None
  2. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  3. Verify that the endpoint is updated:

    $ openstack endpoint list --service cinderv3 --interface public
    +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
    | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                             |
    +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
    | 5bc4760fa4944a14b1c052cc067b952c | regionOne | cinderv3     | volumev3     | True    | public    | https://mycinder.domain.name/v3 |
    +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
  4. Exit the OpenStackClient pod:

    $ exit
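
You can also confirm that the custom hostname resolves through your data center DNS. A quick check from your workstation, assuming the example hostname used in this procedure:

$ dig +short mycinder.domain.name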
