
Chapter 5. Creating the control plane


The Red Hat OpenStack Services on OpenShift (RHOSO) control plane contains the RHOSO services that manage the cloud. The RHOSO services run as a Red Hat OpenShift Container Platform (RHOCP) workload.

Note

Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands.

5.1. Prerequisites

  • The OpenStack Operator (openstack-operator) is installed. For more information, see Installing and preparing the Operators.
  • The RHOCP cluster is prepared for RHOSO networks. For more information, see Preparing RHOCP for RHOSO networks.
  • The RHOCP cluster is not configured with any network policies that prevent communication between the openstack-operators namespace and the control plane namespace (default openstack). Use the following command to check the existing network policies on the cluster:

    $ oc get networkpolicy -n openstack

    This command returns the message "No resources found in openstack namespace" when there are no network policies. If this command returns a list of network policies, then check that they do not prevent communication between the openstack-operators namespace and the control plane namespace. For more information about network policies, see Network security in the RHOCP Networking guide.

  • You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
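
If your cluster does use network policies, they must permit traffic from the openstack-operators namespace into the control plane namespace. The following sketch shows the general shape of a NetworkPolicy that allows this ingress; the policy name is hypothetical, and you must adapt the selectors to your environment:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-openstack-operators # hypothetical name
      namespace: openstack
    spec:
      podSelector: {} # applies to all pods in the openstack namespace
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openstack-operators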

5.2. Creating the control plane

Define an OpenStackControlPlane custom resource (CR) to perform the following tasks:

  • Create the control plane.
  • Enable the Red Hat OpenStack Services on OpenShift (RHOSO) services.

The following procedure creates an initial control plane with the recommended configurations for each service. The procedure helps you quickly create an operating control plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add service customizations to a deployed environment. For more information about how to customize your control plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide.

For an example OpenStackControlPlane CR, see Example OpenStackControlPlane CR.

Tip

Use the following commands to view the OpenStackControlPlane CRD definition and specification schema:

$ oc describe crd openstackcontrolplane

$ oc explain openstackcontrolplane.spec

Procedure

  1. Create a file on your workstation named openstack_control_plane.yaml to define the OpenStackControlPlane CR:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
  2. Specify the Secret CR you created to provide secure access to the RHOSO service pods in Providing secure access to the Red Hat OpenStack Services on OpenShift services:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      secret: osp-secret
  3. Specify the storageClass you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end:

    spec:
      secret: osp-secret
      storageClass: <RHOCP_storage_class>
    • Replace <RHOCP_storage_class> with the storage class you created for your RHOCP cluster storage back end. For information about storage classes, see Creating a storage class.
  4. Add the following service configurations:

    Note
    • The following service examples use IP addresses from the default RHOSO MetalLB IPAddressPool range for the loadBalancerIPs field. Update the loadBalancerIPs field with the IP address from the MetalLB IPAddressPool range that you created.
    • You cannot override the default public service endpoint. The public service endpoints are exposed as RHOCP routes by default, because only routes are supported for public endpoints.
    • Block Storage service (cinder):

        cinder:
          apiOverride:
            route: {}
          template:
            databaseInstance: openstack
            secret: osp-secret
            cinderAPI:
              replicas: 3
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
            cinderScheduler:
              replicas: 1
            cinderBackup:
              networkAttachments:
              - storage
              replicas: 0
            cinderVolumes:
              volume1:
                networkAttachments:
                - storage
                replicas: 0
      • cinderBackup.replicas: You can deploy the initial control plane without activating the cinderBackup service. To deploy the service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for each service and how to configure a back end for the Block Storage service and the backup service, see Configuring the Block Storage backup service in Configuring persistent storage.
      • cinderVolumes.replicas: You can deploy the initial control plane without activating the cinderVolumes service. To deploy the service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for the cinderVolumes service and how to configure a back end for the service, see Configuring the volume service in Configuring persistent storage.
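
      To activate either service later, raise its replica count after you configure the back end. The following sketch is illustrative only; the customServiceConfig content is a placeholder that depends on your storage back end:

        cinderVolumes:
          volume1:
            networkAttachments:
            - storage
            replicas: 1 # set after the back end is configured
            customServiceConfig: | # placeholder; see Configuring persistent storage
              [volume1]
              volume_backend_name = volume1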
    • Compute service (nova):

        nova:
          apiOverride:
            route: {}
          template:
            apiServiceTemplate:
              replicas: 3
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
            metadataServiceTemplate:
              replicas: 3
              override:
                service:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            schedulerServiceTemplate:
              replicas: 3
            cellTemplates:
              cell0:
                cellDatabaseAccount: nova-cell0
                cellDatabaseInstance: openstack
                cellMessageBusInstance: rabbitmq
                hasAPIAccess: true
              cell1:
                cellDatabaseAccount: nova-cell1
                cellDatabaseInstance: openstack-cell1
                cellMessageBusInstance: rabbitmq-cell1
                noVNCProxyServiceTemplate:
                  enabled: true
                  networkAttachments:
                  - ctlplane
                hasAPIAccess: true
            secret: osp-secret
      Note

      A full set of Compute services (nova) is deployed by default for each of the default cells, cell0 and cell1: nova-api, nova-metadata, nova-scheduler, and nova-conductor. The novncproxy service is also enabled for cell1 by default.

    • DNS service for the data plane:

        dns:
          template:
            options:
            - key: server
              values:
              - 192.168.122.1
            - key: server
              values:
              - 192.168.122.2
            override:
              service:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: ctlplane
                    metallb.universe.tf/allow-shared-ip: ctlplane
                    metallb.universe.tf/loadBalancerIPs: 192.168.122.80
                spec:
                  type: LoadBalancer
            replicas: 2
      • dns.template.options: Defines the dnsmasq instances required for each DNS server by using key-value pairs. In this example, two key-value pairs are defined because requests are forwarded to two DNS servers.
      • dns.template.options.key: Specifies the dnsmasq parameter to customize for the deployed dnsmasq instance. Set to one of the following valid values:

        • server
        • rev-server
        • srv-host
        • txt-record
        • ptr-record
        • rebind-domain-ok
        • naptr-record
        • cname
        • host-record
        • caa-record
        • dns-rr
        • auth-zone
        • synth-domain
        • no-negcache
        • local
      • dns.template.options.values: Specifies the values for the dnsmasq parameter. You can specify a generic DNS server as the value, for example, 1.1.1.1, or a DNS server for a specific domain, for example, /google.com/8.8.8.8.

        Note

        This DNS service, dnsmasq, provides DNS services for nodes on the RHOSO data plane. dnsmasq is different from the RHOSO DNS service (designate) that provides DNS as a service for cloud tenants.
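
        For example, a generic forwarder can be combined with a domain-specific forwarder in the options list. The addresses and domain below are illustrative:

          dns:
            template:
              options:
              - key: server
                values:
                - 1.1.1.1 # generic upstream DNS server
              - key: server
                values:
                - /example.com/192.168.122.1 # forward example.com queries to a specific server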

    • Identity service (keystone)

        keystone:
          apiOverride:
            route: {}
          template:
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            databaseInstance: openstack
            secret: osp-secret
            replicas: 3
    • Image service (glance):

        glance:
          apiOverrides:
            default:
              route: {}
          template:
            databaseInstance: openstack
            storage:
              storageRequest: 10G
            secret: osp-secret
            keystoneEndpoint: default
            glanceAPIs:
              default:
                replicas: 0 # Configure back end; set to 3 when deploying service
                override:
                  service:
                    internal:
                      metadata:
                        annotations:
                          metallb.universe.tf/address-pool: internalapi
                          metallb.universe.tf/allow-shared-ip: internalapi
                          metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                      spec:
                        type: LoadBalancer
                networkAttachments:
                - storage
      • glanceAPIs.default.replicas: You can deploy the initial control plane without activating the Image service (glance). To deploy the Image service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for the Image service and how to configure a back end for the service, see Configuring the Image service (glance) in Configuring persistent storage. If you do not deploy the Image service, you cannot upload images to the cloud or start an instance.
    • Key Management service (barbican):

        barbican:
          apiOverride:
            route: {}
          template:
            databaseInstance: openstack
            secret: osp-secret
            barbicanAPI:
              replicas: 3
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
            barbicanWorker:
              replicas: 3
            barbicanKeystoneListener:
              replicas: 1
    • Networking service (neutron):

        neutron:
          apiOverride:
            route: {}
          template:
            replicas: 3
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            databaseInstance: openstack
            secret: osp-secret
            networkAttachments:
            - internalapi
    • Object Storage service (swift):

        swift:
          enabled: true
          proxyOverride:
            route: {}
          template:
            swiftProxy:
              networkAttachments:
              - storage
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
              replicas: 2
              secret: osp-secret
            swiftRing:
              ringReplicas: 3
            swiftStorage:
              networkAttachments:
              - storage
              replicas: 3
              storageRequest: 10Gi
    • OVN:

        ovn:
          template:
            ovnDBCluster:
              ovndbcluster-nb:
                replicas: 3
                dbType: NB
                storageRequest: 10G
                networkAttachment: internalapi
              ovndbcluster-sb:
                replicas: 3
                dbType: SB
                storageRequest: 10G
                networkAttachment: internalapi
            ovnNorthd: {}
    • Placement service (placement):

        placement:
          apiOverride:
            route: {}
          template:
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
            databaseInstance: openstack
            replicas: 3
            secret: osp-secret
    • Telemetry service (ceilometer, prometheus):

        telemetry:
          enabled: true
          template:
            metricStorage:
              enabled: true
              dashboardsEnabled: true
              dataplaneNetwork: ctlplane
              networkAttachments:
                - ctlplane
              monitoringStack:
                alertingEnabled: true
                scrapeInterval: 30s
                storage:
                  strategy: persistent
                  retention: 24h
                  persistent:
                    pvcStorageRequest: 20G
            autoscaling:
              enabled: false
              aodh:
                databaseAccount: aodh
                databaseInstance: openstack
                passwordSelector:
                  aodhService: AodhPassword
                rabbitMqClusterName: rabbitmq
                serviceUser: aodh
                secret: osp-secret
              heatInstance: heat
            ceilometer:
              enabled: true
              secret: osp-secret
            logging:
              enabled: false
      • telemetry.template.metricStorage.dataplaneNetwork: Defines the network that you use to scrape dataplane node_exporter endpoints.
      • telemetry.template.metricStorage.networkAttachments: Lists the networks that each service pod is attached to by using the NetworkAttachmentDefinition resource names. You configure a NIC for the service for each network attachment that you specify. If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. You must create a networkAttachment that matches the network that you specify as the dataplaneNetwork, so that Prometheus can scrape data from the dataplane nodes.
      • telemetry.template.autoscaling: You must have the autoscaling field present, even if autoscaling is disabled. For more information about autoscaling, see Autoscaling for Instances.
  5. Add the following service configurations to implement high availability (HA):

    • A MariaDB Galera cluster for use by all RHOSO services (openstack), and a MariaDB Galera cluster for use by the Compute service for cell1 (openstack-cell1):

        galera:
          templates:
            openstack:
              storageRequest: 5000M
              secret: osp-secret
              replicas: 3
            openstack-cell1:
              storageRequest: 5000M
              secret: osp-secret
              replicas: 3
    • A single memcached cluster that contains three memcached servers:

        memcached:
          templates:
            memcached:
              replicas: 3
    • A RabbitMQ cluster for use by all RHOSO services (rabbitmq), and a RabbitMQ cluster for use by the Compute service for cell1 (rabbitmq-cell1):

        rabbitmq:
          templates:
            rabbitmq:
              replicas: 3
              override:
                service:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.85
                  spec:
                    type: LoadBalancer
            rabbitmq-cell1:
              replicas: 3
              override:
                service:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.86
                  spec:
                    type: LoadBalancer
      Note

      You cannot configure multiple RabbitMQ instances on the same virtual IP (VIP) address because all RabbitMQ instances use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses.

  6. Create the control plane:

    $ oc create -f openstack_control_plane.yaml -n openstack
  7. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

    Note

    Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell (rsh) to run OpenStack CLI commands.

    $ oc rsh -n openstack openstackclient
  8. Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are either completed or running.

Verification

  1. Open a remote shell connection to the OpenStackClient pod:

    $ oc rsh -n openstack openstackclient
  2. Confirm that the internal service endpoints are registered with each service:

    $ openstack endpoint list -c 'Service Name' -c Interface -c URL --service glance
    +--------------+-----------+-------------------------------------------------------------------------+
    | Service Name | Interface | URL                                                                     |
    +--------------+-----------+-------------------------------------------------------------------------+
    | glance       | internal  | https://glance-internal.openstack.svc                                   |
    | glance       | public    | https://glance-default-public-openstack.apps.ostest.test.metalkube.org |
    +--------------+-----------+-------------------------------------------------------------------------+
  3. Exit the OpenStackClient pod:

    $ exit

5.3. Example OpenStackControlPlane CR

The following example OpenStackControlPlane CR is a complete control plane configuration that includes all the key services that must always be enabled for a successful deployment.

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  secret: osp-secret
  storageClass: your-RHOCP-storage-class
  cinder:
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      cinderAPI:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      cinderScheduler:
        replicas: 1
      cinderBackup:
        networkAttachments:
        - storage
        replicas: 0 # backend needs to be configured to activate the service
      cinderVolumes:
        volume1:
          networkAttachments:
          - storage
          replicas: 0 # backend needs to be configured to activate the service
  nova:
    apiOverride:
      route: {}
    template:
      apiServiceTemplate:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      metadataServiceTemplate:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      schedulerServiceTemplate:
        replicas: 3
      cellTemplates:
        cell0:
          cellDatabaseAccount: nova-cell0
          cellDatabaseInstance: openstack
          cellMessageBusInstance: rabbitmq
          hasAPIAccess: true
        cell1:
          cellDatabaseAccount: nova-cell1
          cellDatabaseInstance: openstack-cell1
          cellMessageBusInstance: rabbitmq-cell1
          noVNCProxyServiceTemplate:
            enabled: true
            networkAttachments:
            - ctlplane
          hasAPIAccess: true
      secret: osp-secret
  dns:
    template:
      options:
      - key: server
        values:
        - 192.168.122.1
      - key: server
        values:
        - 192.168.122.2
      override:
        service:
          metadata:
            annotations:
              metallb.universe.tf/address-pool: ctlplane
              metallb.universe.tf/allow-shared-ip: ctlplane
              metallb.universe.tf/loadBalancerIPs: 192.168.122.80
          spec:
            type: LoadBalancer
      replicas: 2
  galera:
    templates:
      openstack:
        storageRequest: 5000M
        secret: osp-secret
        replicas: 3
      openstack-cell1:
        storageRequest: 5000M
        secret: osp-secret
        replicas: 3
  keystone:
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
      replicas: 3
  glance:
    apiOverrides:
      default:
        route: {}
    template:
      databaseInstance: openstack
      storage:
        storageRequest: 10G
      secret: osp-secret
      keystoneEndpoint: default
      glanceAPIs:
        default:
          replicas: 0 # Configure back end; set to 3 when deploying service
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
          networkAttachments:
          - storage
  barbican:
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      barbicanAPI:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
      barbicanWorker:
        replicas: 3
      barbicanKeystoneListener:
        replicas: 1
  memcached:
    templates:
      memcached:
        replicas: 3
  neutron:
    apiOverride:
      route: {}
    template:
      replicas: 3
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
      networkAttachments:
      - internalapi
  swift:
    enabled: true
    proxyOverride:
      route: {}
    template:
      swiftProxy:
        networkAttachments:
        - storage
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        replicas: 1
      swiftRing:
        ringReplicas: 1
      swiftStorage:
        networkAttachments:
        - storage
        replicas: 1
        storageRequest: 10Gi
  ovn:
    template:
      ovnDBCluster:
        ovndbcluster-nb:
          replicas: 3
          dbType: NB
          storageRequest: 10G
          networkAttachment: internalapi
        ovndbcluster-sb:
          replicas: 3
          dbType: SB
          storageRequest: 10G
          networkAttachment: internalapi
      ovnNorthd: {}
  placement:
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      replicas: 3
      secret: osp-secret
  rabbitmq:
    templates:
      rabbitmq:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.85
            spec:
              type: LoadBalancer
      rabbitmq-cell1:
        replicas: 3
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.86
            spec:
              type: LoadBalancer
  telemetry:
    enabled: true
    template:
      metricStorage:
        enabled: true
        dashboardsEnabled: true
        dataplaneNetwork: ctlplane
        networkAttachments:
          - ctlplane
        monitoringStack:
          alertingEnabled: true
          scrapeInterval: 30s
          storage:
            strategy: persistent
            retention: 24h
            persistent:
              pvcStorageRequest: 20G
      autoscaling:
        enabled: false
        aodh:
          databaseAccount: aodh
          databaseInstance: openstack
          passwordSelector:
            aodhService: AodhPassword
          rabbitMqClusterName: rabbitmq
          serviceUser: aodh
          secret: osp-secret
        heatInstance: heat
      ceilometer:
        enabled: true
        secret: osp-secret
      logging:
        enabled: false
  • spec.storageClass: The storage class that you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end.
  • spec.cinder: Service-specific parameters for the Block Storage service (cinder).
  • spec.cinder.template.cinderBackup: The Block Storage service back end. For more information on configuring storage services, see the Configuring persistent storage guide.
  • spec.cinder.template.cinderVolumes: The Block Storage service configuration. For more information on configuring storage services, see the Configuring persistent storage guide.
  • spec.cinder.template.cinderVolumes.networkAttachments: The list of networks that each service pod is directly attached to, specified by using the NetworkAttachmentDefinition resource names. A NIC is configured for the service for each specified network attachment.

    Note

    If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. For example, the Block Storage service uses the storage network to connect to a storage back end; the Identity service (keystone) uses an LDAP or Active Directory (AD) network; the ovnDBCluster service uses the internalapi network; and the ovnController service uses the tenant network.

  • spec.nova: Service-specific parameters for the Compute service (nova).
  • spec.nova.apiOverride: Service API route definition. You can customize the service route by using route-specific annotations. For more information, see Route-specific annotations in the RHOCP Networking guide. Set route: to {} to apply the default route template.
  • metallb.universe.tf/address-pool: The internal service API endpoint registered as a MetalLB service with the IPAddressPool internalapi.
  • metallb.universe.tf/loadBalancerIPs: The virtual IP (VIP) address for the service. The IP is shared with other services by default.
  • spec.rabbitmq: The RabbitMQ instances exposed to an isolated network, each with a distinct IP address defined in its loadBalancerIPs annotation.

    Note

    You cannot configure multiple RabbitMQ instances on the same virtual IP (VIP) address because all RabbitMQ instances use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses.

  • rabbitmq.override.service.metadata.annotations.metallb.universe.tf/loadBalancerIPs: The distinct IP address for a RabbitMQ instance that is exposed to an isolated network.
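The override parameters described above can be combined in a single fragment. The following sketch is illustrative only: the IP addresses and the internalapi address-pool name are assumptions, so substitute the values defined in your own IPAddressPool resources.

```yaml
# Illustrative sketch only: IP addresses and the address-pool name are
# assumptions; use the values configured for your own cluster.
spec:
  nova:
    apiOverride:
      route: {}                                      # apply the default route template
    template:
      apiServiceTemplate:
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/loadBalancerIPs: 192.168.122.80
              spec:
                type: LoadBalancer
  rabbitmq:
    templates:
      rabbitmq:
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.85   # distinct IP per instance
            spec:
              type: LoadBalancer
      rabbitmq-cell1:
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.86   # must differ: all instances use the same port
            spec:
              type: LoadBalancer
```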

5.4. Removing a service from the control plane

You can completely remove a service and the service database from the control plane after deployment by disabling the service. Many services are enabled by default, which means that the OpenStack Operator creates resources such as the service database and Identity service (keystone) users, even if no service pod is created because replicas is set to 0.

Warning

Remove a service with caution. Removing a service is not the same as stopping service pods, and it is irreversible: disabling a service deletes the service database, and any resources that referenced the service are no longer tracked. Create a backup of the service database before you remove a service.
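The difference between stopping and removing a service is visible in the CR itself. The following is a minimal sketch using the Block Storage service as an example; the cinderAPI field name follows the cinder template shown earlier in this chapter.

```yaml
# Stopping service pods (reversible): the service stays enabled, its
# database and Identity service resources are kept, only the pods are
# scaled away.
cinder:
  enabled: true
  template:
    cinderAPI:
      replicas: 0
---
# Removing the service (irreversible): the OpenStack Operator deletes the
# service database and stops tracking resources that referenced the service.
cinder:
  enabled: false
```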

Procedure

  1. Open the OpenStackControlPlane CR file on your workstation.
  2. Locate the service you want to remove from the control plane and disable it:

      cinder:
        enabled: false
        apiOverride:
          route: {}
          ...
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP removes the resource related to the disabled service. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resource is updated with the disabled service when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  5. Optional: Confirm that the pods from the disabled service are no longer listed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack
  6. Check that the service is removed:

    $ oc get cinder -n openstack

    This command returns the following message when the service is successfully removed:

    No resources found in openstack namespace.
  7. Check that the API endpoints for the service are removed from the Identity service (keystone):

    $ oc rsh -n openstack openstackclient
    $ openstack endpoint list --service volumev3

    This command returns the following message when the API endpoints for the service are successfully removed:

    No service with a type, name or ID of 'volumev3' exists.

5.5. Additional resources
