Chapter 6. Creating the control plane


The Red Hat OpenStack Services on OpenShift (RHOSO) control plane contains the RHOSO services that manage the cloud. The RHOSO services run as a Red Hat OpenShift Container Platform (RHOCP) workload.

Note

Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands.

6.1. Prerequisites

  • The OpenStack Operator (openstack-operator) is installed. For more information, see Installing and preparing the Operators.
  • The RHOCP cluster is prepared for RHOSO networks. For more information, see Preparing RHOCP for RHOSO networks.
  • The RHOCP cluster is not configured with any network policies that prevent communication between the openstack-operators namespace and the control plane namespace (default openstack). Use the following command to check the existing network policies on the cluster:

    $ oc get networkpolicy -n openstack
  • You are logged on to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.
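For context on the network policy prerequisite, a default-deny ingress policy like the following hypothetical example (not part of any RHOSO deployment) in the openstack namespace would block the operator traffic that this prerequisite warns about, and would need to be removed or scoped before you create the control plane:

```yaml
# Hypothetical example of a NetworkPolicy that would violate this
# prerequisite: it denies all ingress traffic to pods in the openstack
# namespace, including traffic from the openstack-operators namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: openstack
spec:
  podSelector: {}   # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress         # no ingress rules are listed, so all ingress is denied
```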

6.2. Creating the control plane

Define an OpenStackControlPlane custom resource (CR) to perform the following tasks:

  • Create the control plane.
  • Enable the Red Hat OpenStack Services on OpenShift (RHOSO) services.

The following procedure creates an initial control plane with the recommended configuration for each service. The procedure helps you quickly create an operational control plane that you can use to troubleshoot issues and test the environment before you add all the customizations you require. You can add service customizations to a deployed environment. For more information about how to customize your control plane after deployment, see Customizing the Red Hat OpenStack Services on OpenShift deployment.

For an example OpenStackControlPlane CR, see Example OpenStackControlPlane CR.

Tip

Use the following commands to view the OpenStackControlPlane CRD definition and specification schema:

$ oc describe crd openstackcontrolplane

$ oc explain openstackcontrolplane.spec

Procedure

  1. Create a file on your workstation named openstack_control_plane.yaml to define the OpenStackControlPlane CR:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
  2. Specify the Secret CR you created to provide secure access to the RHOSO service pods in Providing secure access to the Red Hat OpenStack Services on OpenShift services:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      secret: osp-secret
  3. Specify the storageClass you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end:

    spec:
      secret: osp-secret
      storageClass: <RHOCP_storage_class>
    • Replace <RHOCP_storage_class> with the storage class you created for your RHOCP cluster storage back end. For information about storage classes, see Creating a storage class.
  4. Add the following service configurations:

    Note

    The following service snippets use IP addresses from the default RHOSO MetalLB IPAddressPool range for the loadBalancerIPs field. Update the loadBalancerIPs field with the IP address from the MetalLB IPAddressPool range that you created.

    Block Storage service (cinder)
      cinder:
        apiOverride:
          route: {}
        template:
          databaseInstance: openstack
          secret: osp-secret
          cinderAPI:
            replicas: 3
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
          cinderScheduler:
            replicas: 1
          cinderBackup:
            networkAttachments:
            - storage
            replicas: 0
          cinderVolumes:
            volume1:
              networkAttachments:
              - storage
              replicas: 0
    Compute service (nova)
      nova:
        apiOverride:
          route: {}
        template:
          apiServiceTemplate:
            replicas: 3
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
          metadataServiceTemplate:
            replicas: 3
            override:
              service:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          schedulerServiceTemplate:
            replicas: 3
          cellTemplates:
            cell0:
              cellDatabaseAccount: nova-cell0
              cellDatabaseInstance: openstack
              cellMessageBusInstance: rabbitmq
              hasAPIAccess: true
            cell1:
              cellDatabaseAccount: nova-cell1
              cellDatabaseInstance: openstack-cell1
              cellMessageBusInstance: rabbitmq-cell1
              noVNCProxyServiceTemplate:
                enabled: true
              hasAPIAccess: true
          secret: osp-secret
    Note

    A full set of Compute services (nova) is deployed by default for each of the default cells, cell0 and cell1: nova-api, nova-metadata, nova-scheduler, and nova-conductor. The novncproxy service is also enabled for cell1 by default.

    DNS service for the data plane
      dns:
        template:
          options:
          - key: server
            values:
            - <IP address for DNS server reachable from dnsmasq pod>
          override:
            service:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: ctlplane
                  metallb.universe.tf/allow-shared-ip: ctlplane
                  metallb.universe.tf/loadBalancerIPs: 192.168.122.80
              spec:
                type: LoadBalancer
          replicas: 2
    • options: Defines the dnsmasq instances required for each DNS server by using key-value pairs. In this example, one key-value pair is defined because only one DNS server is configured to forward requests to.
    • key: Specifies the dnsmasq parameter to customize for the deployed dnsmasq instance. Set to one of the following valid values:

      • server
      • rev-server
      • srv-host
      • txt-record
      • ptr-record
      • rebind-domain-ok
      • naptr-record
      • cname
      • host-record
      • caa-record
      • dns-rr
      • auth-zone
      • synth-domain
      • no-negcache
      • local
    • values: Specifies the value for the DNS server reachable from the dnsmasq pod on the RHOCP cluster network. You can specify a generic DNS server as the value, for example, 1.1.1.1, or a DNS server for a specific domain, for example, /google.com/8.8.8.8.

      Note

      This DNS service, dnsmasq, provides DNS services for nodes on the RHOSO data plane. dnsmasq is different from the RHOSO DNS service (designate) that provides DNS as a service for cloud tenants.

    Identity service (keystone)
      keystone:
        apiOverride:
          route: {}
        template:
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          databaseInstance: openstack
          secret: osp-secret
          replicas: 3
    Image service (glance)
      glance:
        apiOverrides:
          default:
            route: {}
        template:
          databaseInstance: openstack
          storage:
            storageRequest: 10G
          secret: osp-secret
          keystoneEndpoint: default
          glanceAPIs:
            default:
              replicas: 0
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
              networkAttachments:
              - storage
    • replicas - Set to 0 to configure the back end; set to 3 when deploying the service.

      You can deploy the initial control plane without activating the Image service (glance). To deploy the Image service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for the Image service and how to configure a back end for the service, see Configuring the Image service (glance) in Configuring persistent storage. If you do not deploy the Image service, you cannot upload images to the cloud or start an instance.

    Key Management service (barbican)
      barbican:
        apiOverride:
          route: {}
        template:
          databaseInstance: openstack
          secret: osp-secret
          barbicanAPI:
            replicas: 3
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
          barbicanWorker:
            replicas: 3
          barbicanKeystoneListener:
            replicas: 1
    Networking service (neutron)
      neutron:
        apiOverride:
          route: {}
        template:
          replicas: 3
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          databaseInstance: openstack
          secret: osp-secret
          networkAttachments:
          - internalapi
    Object Storage service (swift)
      swift:
        enabled: true
        proxyOverride:
          route: {}
        template:
          swiftProxy:
            networkAttachments:
            - storage
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            replicas: 2
            secret: osp-secret
          swiftRing:
            ringReplicas: 3
          swiftStorage:
            networkAttachments:
            - storage
            replicas: 3
            storageRequest: 10Gi
    OVN
      ovn:
        template:
          ovnDBCluster:
            ovndbcluster-nb:
              replicas: 3
              dbType: NB
              storageRequest: 10G
              networkAttachment: internalapi
            ovndbcluster-sb:
              replicas: 3
              dbType: SB
              storageRequest: 10G
              networkAttachment: internalapi
          ovnNorthd: {}
    Placement service (placement)
      placement:
        apiOverride:
          route: {}
        template:
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          databaseInstance: openstack
          replicas: 3
          secret: osp-secret
    Telemetry service (ceilometer, prometheus)
      telemetry:
        enabled: true
        template:
          metricStorage:
            enabled: true
            dashboardsEnabled: true
            monitoringStack:
              alertingEnabled: true
              scrapeInterval: 30s
              storage:
                strategy: persistent
                retention: 24h
                persistent:
                  pvcStorageRequest: 20G
          autoscaling:
            enabled: false
            aodh:
              databaseAccount: aodh
              databaseInstance: openstack
              passwordSelector:
                aodhService: AodhPassword
              rabbitMqClusterName: rabbitmq
              serviceUser: aodh
              secret: osp-secret
            heatInstance: heat
          ceilometer:
            enabled: true
            secret: osp-secret
          logging:
            enabled: false
    • autoscaling - You must have the autoscaling field present, even if autoscaling is disabled. For more information about autoscaling, see Autoscaling for Instances.
  5. Add the following service configurations to implement high availability (HA):

    MariaDB Galera cluster

    A MariaDB Galera cluster for use by all RHOSO services (openstack), and a MariaDB Galera cluster for use by the Compute service for cell1 (openstack-cell1):

      galera:
        templates:
          openstack:
            storageRequest: 5000M
            secret: osp-secret
            replicas: 3
          openstack-cell1:
            storageRequest: 5000M
            secret: osp-secret
            replicas: 3
    memcached cluster

    A single memcached cluster that contains three memcached servers:

      memcached:
        templates:
          memcached:
             replicas: 3
    RabbitMQ cluster

    A RabbitMQ cluster for use by all RHOSO services (rabbitmq), and a RabbitMQ cluster for use by the Compute service for cell1 (rabbitmq-cell1):

      rabbitmq:
        templates:
          rabbitmq:
            replicas: 3
            override:
              service:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.85
                spec:
                  type: LoadBalancer
          rabbitmq-cell1:
            replicas: 3
            override:
              service:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.86
                spec:
                  type: LoadBalancer
    Note

    Multiple RabbitMQ instances cannot share the same VIP because they use the same port. If you need to expose multiple RabbitMQ instances to the same network, you must use distinct IP addresses.

  6. Create the control plane:

    $ oc create -f openstack_control_plane.yaml -n openstack
  7. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    Sample output
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

    Note

    Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands.

    $ oc rsh -n openstack openstackclient
  8. Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are either completed or running.
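If you script this check, you can filter the STATUS column with standard tools. The following sketch is illustrative only: the heredoc stands in for live `oc get pods -n openstack` output, and the pod names are invented:

```shell
# List pods that are not yet Running or Completed. The sample file stands
# in for live 'oc get pods -n openstack' output; pod names are invented.
cat <<'EOF' > sample_pods.txt
NAME                   READY   STATUS      RESTARTS   AGE
keystone-0             1/1     Running     0          5m
glance-db-create-p8x   0/1     Completed   0          6m
cinder-api-0           0/1     Pending     0          2m
EOF
# Skip the header row, then print rows whose STATUS is neither Running
# nor Completed; an empty result means the control plane is deployed.
awk 'NR > 1 && $3 != "Running" && $3 != "Completed"' sample_pods.txt
```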

Verification

  1. Open a remote shell connection to the OpenStackClient pod:

    $ oc rsh -n openstack openstackclient
  2. Confirm that the internal service endpoints are registered with each service:

    $ openstack endpoint list -c 'Service Name' -c Interface -c URL --service glance
    Sample output
    +--------------+-----------+------------------------------------------------------------------------+
    | Service Name | Interface | URL                                                                    |
    +--------------+-----------+------------------------------------------------------------------------+
    | glance       | internal  | https://glance-default-internal.openstack.svc                          |
    | glance       | public    | https://glance-default-public-openstack.apps.ostest.test.metalkube.org |
    +--------------+-----------+------------------------------------------------------------------------+
  3. Exit the OpenStackClient pod:

    $ exit
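You can also script this verification step. The following sketch is illustrative only: the heredoc stands in for `openstack endpoint list` output captured from the openstackclient pod, and the URLs are examples:

```shell
# Confirm that a service has both an internal and a public endpoint
# registered. The sample file stands in for saved 'openstack endpoint
# list' output; the URLs shown are illustrative.
cat <<'EOF' > endpoints.txt
glance internal https://glance-default-internal.openstack.svc
glance public   https://glance-default-public-openstack.apps.ostest.test.metalkube.org
EOF
for iface in internal public; do
  if grep -qw "$iface" endpoints.txt; then
    echo "glance $iface endpoint registered"
  else
    echo "glance $iface endpoint MISSING"
  fi
done
```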

6.3. Example OpenStackControlPlane CR

The following example OpenStackControlPlane CR is a complete control plane configuration that includes all the key services that must always be enabled for a successful deployment.

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  secret: osp-secret
  storageClass: your-RHOCP-storage-class
  cinder:
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      cinderAPI:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      cinderScheduler:
        replicas: 1
      cinderBackup:
        networkAttachments:
        - storage
        replicas: 0 # backend needs to be configured to activate the service
      cinderVolumes:
        volume1:
          networkAttachments:
          - storage
          replicas: 0 # backend needs to be configured to activate the service
  nova:
    apiOverride:
      route: {}
    template:
      apiServiceTemplate:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      metadataServiceTemplate:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      schedulerServiceTemplate:
        replicas: 3
      cellTemplates:
        cell0:
          cellDatabaseAccount: nova-cell0
          cellDatabaseInstance: openstack
          cellMessageBusInstance: rabbitmq
          hasAPIAccess: true
        cell1:
          cellDatabaseAccount: nova-cell1
          cellDatabaseInstance: openstack-cell1
          cellMessageBusInstance: rabbitmq-cell1
          noVNCProxyServiceTemplate:
            enabled: true
          hasAPIAccess: true
      secret: osp-secret
  dns:
    template:
      options:
      - key: server
        values:
        - 192.168.122.1
      - key: server
        values:
        - 192.168.122.2
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: ctlplane
              metallb.universe.tf/allow-shared-ip: ctlplane
              metallb.universe.tf/loadBalancerIPs: 192.168.122.80
          spec:
            type: LoadBalancer
      replicas: 2
  galera:
    templates:
      openstack:
        storageRequest: 5000M
        secret: osp-secret
        replicas: 3
      openstack-cell1:
        storageRequest: 5000M
        secret: osp-secret
        replicas: 3
  keystone:
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
      replicas: 3
  glance:
    apiOverrides:
      default:
        route: {}
    template:
      databaseInstance: openstack
      storage:
        storageRequest: 10G
      secret: osp-secret
      keystoneEndpoint: default
      glanceAPIs:
        default:
          replicas: 0 # Configure back end; set to 3 when deploying service
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          networkAttachments:
          - storage
  barbican:
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      barbicanAPI:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      barbicanWorker:
        replicas: 3
      barbicanKeystoneListener:
        replicas: 1
  memcached:
    templates:
      memcached:
         replicas: 3
  neutron:
    apiOverride:
      route: {}
    template:
      replicas: 3
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
      networkAttachments:
      - internalapi
  swift:
    enabled: true
    proxyOverride:
      route: {}
    template:
      swiftProxy:
        networkAttachments:
        - storage
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        replicas: 2
      swiftRing:
        ringReplicas: 3
      swiftStorage:
        networkAttachments:
        - storage
        replicas: 3
        storageRequest: 10Gi
  ovn:
    template:
      ovnDBCluster:
        ovndbcluster-nb:
          replicas: 3
          dbType: NB
          storageRequest: 10G
          networkAttachment: internalapi
        ovndbcluster-sb:
          replicas: 3
          dbType: SB
          storageRequest: 10G
          networkAttachment: internalapi
      ovnNorthd: {}
  placement:
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      replicas: 3
      secret: osp-secret
  rabbitmq:
    templates:
      rabbitmq:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.85
            spec:
              type: LoadBalancer
      rabbitmq-cell1:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.86
            spec:
              type: LoadBalancer
  telemetry:
    enabled: true
    template:
      metricStorage:
        enabled: true
        dashboardsEnabled: true
        monitoringStack:
          alertingEnabled: true
          scrapeInterval: 30s
          storage:
            strategy: persistent
            retention: 24h
            persistent:
              pvcStorageRequest: 20G
      autoscaling:
        enabled: false
        aodh:
          databaseAccount: aodh
          databaseInstance: openstack
          passwordSelector:
            aodhService: AodhPassword
          rabbitMqClusterName: rabbitmq
          serviceUser: aodh
          secret: osp-secret
        heatInstance: heat
      ceilometer:
        enabled: true
        secret: osp-secret
      logging:
        enabled: false
  • storageClass - The storage class that you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end.
  • cinder - Service-specific parameters for the Block Storage service (cinder).
  • cinderBackup - The Block Storage service back end. For more information on configuring storage services, see the Configuring persistent storage guide.
  • cinderVolumes - The Block Storage service configuration. For more information on configuring storage services, see the Configuring persistent storage guide.
  • networkAttachments - The list of networks that each service pod is directly attached to, specified by using the NetworkAttachmentDefinition resource names. A NIC is configured for the service for each specified network attachment.

    Note

    If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. For example, the Block Storage service uses the storage network to connect to a storage back end; the Identity service (keystone) uses an LDAP or Active Directory (AD) network; the ovnDBCluster service uses the internalapi network; and the ovnController service uses the tenant network.

  • nova - Service-specific parameters for the Compute service (nova).
  • apiOverride - Service API route definition. You can customize the service route by using route-specific annotations. For more information, see Route-specific annotations in the RHOCP Networking guide. Set route: to {} to apply the default route template.
  • metallb.universe.tf/address-pool: internalapi - The internal service API endpoint registered as a MetalLB service with the IPAddressPool internalapi.
  • metallb.universe.tf/loadBalancerIPs: 172.17.0.80 - The virtual IP (VIP) address for the service. The IP is shared with other services by default.
  • rabbitmq - The RabbitMQ instances exposed to an isolated network, each with a distinct IP address defined in the loadBalancerIPs annotation.

    Note

    Multiple RabbitMQ instances cannot share the same VIP because they use the same port. If you need to expose multiple RabbitMQ instances to the same network, you must use distinct IP addresses.

  • metallb.universe.tf/loadBalancerIPs: 172.17.0.85 - The distinct IP address for a RabbitMQ instance that is exposed to an isolated network.
  • metallb.universe.tf/loadBalancerIPs: 172.17.0.86 - The distinct IP address for a RabbitMQ instance that is exposed to an isolated network.
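Before you apply the CR, the distinct-VIP rule for RabbitMQ can be checked mechanically. The following sketch is illustrative only: the heredoc stands in for the rabbitmq section of your openstack_control_plane.yaml, and the file name rabbitmq_fragment.yaml is arbitrary:

```shell
# Check that no two loadBalancerIPs values under the rabbitmq templates
# collide. The heredoc stands in for the annotations extracted from the
# rabbitmq section of openstack_control_plane.yaml.
cat <<'EOF' > rabbitmq_fragment.yaml
        metallb.universe.tf/loadBalancerIPs: 172.17.0.85
        metallb.universe.tf/loadBalancerIPs: 172.17.0.86
EOF
# Extract the IP values, then report any value that appears more than once
dups=$(grep 'loadBalancerIPs' rabbitmq_fragment.yaml | awk -F': ' '{print $2}' | sort | uniq -d)
if [ -z "$dups" ]; then
  echo "RabbitMQ VIPs are distinct"
else
  echo "Duplicate VIP(s): $dups"
fi
```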

6.4. Removing a service from the control plane

You can completely remove a service and the service database from the control plane after deployment by disabling the service. Many services are enabled by default, which means that the OpenStack Operator creates resources such as the service database and Identity service (keystone) users, even if no service pod is created because replicas is set to 0.

Warning

Remove a service with caution. Removing a service is not the same as stopping service pods, and it is irreversible: disabling a service removes the service database, and any resources that referenced the service are no longer tracked. Create a backup of the service database before you remove a service.

Procedure

  1. Open the OpenStackControlPlane CR file on your workstation.
  2. Locate the service you want to remove from the control plane and disable it:

      cinder:
        enabled: false
        apiOverride:
          route: {}
          ...
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP removes the resource related to the disabled service. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resource is updated with the disabled service when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  5. Optional: Confirm that the pods from the disabled service are no longer listed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack
  6. Check that the service is removed:

    $ oc get cinder -n openstack

    This command returns the following message when the service is successfully removed:

    No resources found in openstack namespace.
  7. Check that the API endpoints for the service are removed from the Identity service (keystone):

    $ oc rsh -n openstack openstackclient
    $ openstack endpoint list --service volumev3

    This command returns the following message when the API endpoints for the service are successfully removed:

    No service with a type, name or ID of 'volumev3' exists.
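Before you run `oc apply`, you can verify locally that the target service is disabled in the CR file. The following sketch is illustrative only: the heredoc writes a minimal stand-in file (deliberately not named openstack_control_plane.yaml, so it cannot clobber your real CR), and the grep pattern assumes the two-space indentation used in this chapter:

```shell
# Local sanity check (no cluster access needed) that a service is disabled
# in the CR before you apply it. The heredoc is a minimal stand-in for the
# real openstack_control_plane.yaml.
cat <<'EOF' > cr_check_sample.yaml
spec:
  cinder:
    enabled: false
    apiOverride:
      route: {}
EOF
# Print the cinder key and the line after it, then look for 'enabled: false'
if grep -A1 '^  cinder:' cr_check_sample.yaml | grep -q 'enabled: false'; then
  echo "cinder is disabled in the CR"
else
  echo "cinder is still enabled"
fi
```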
